Quickstart: Detect offensive content

This quickstart walks you through building a simple application that uses Azure AI Vision's content moderation capabilities to detect potentially offensive or inappropriate content in images.

Content moderation helps you build safer online experiences by automatically detecting adult, racy, or gory content in images.

Prerequisites

  • An Azure subscription. If you don't have one, create a free account.
  • An Azure AI Vision resource or an Azure AI services multi-service resource.
  • An IDE (like Visual Studio Code) and a Python environment (this quickstart uses Python; client libraries are also available for .NET and Node.js).

Set up your Azure resource

You need to create an Azure AI Vision resource to access the content moderation API. You can do this through the Azure portal:

  1. Go to the Azure portal.
  2. Click Create a resource.
  3. Search for "Computer Vision" (for a standalone Azure AI Vision resource) or "Azure AI services" (for a multi-service resource).
  4. Select the service and click Create.
  5. Fill in the required details, including your subscription, resource group, region, and name.
  6. For pricing, choose a free tier (F0) if available for testing.
  7. Review and create the resource.

Once created, navigate to your resource and find your endpoint and key from the "Keys and Endpoint" section. You'll need these for authentication.
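
Alternatively, you can create the resource from the command line with the Azure CLI. A sketch, assuming you're logged in via az login; the resource name, group, and region below are placeholders:

```bash
# Create a resource group (skip if you already have one)
az group create --name my-vision-rg --location eastus

# Create an Azure AI Vision (Computer Vision) resource on the free tier
az cognitiveservices account create \
    --name my-vision-resource \
    --resource-group my-vision-rg \
    --kind ComputerVision \
    --sku F0 \
    --location eastus \
    --yes

# Retrieve the endpoint and keys for authentication
az cognitiveservices account show \
    --name my-vision-resource --resource-group my-vision-rg \
    --query properties.endpoint
az cognitiveservices account keys list \
    --name my-vision-resource --resource-group my-vision-rg
```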

Create your project

Let's assume you're using Python. Create a new directory for your project and a Python file (e.g., moderate_image.py).
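
For example, on macOS or Linux (the directory name is just an example, and the virtual environment is optional but recommended):

```bash
mkdir content-moderation-quickstart
cd content-moderation-quickstart
python -m venv .venv
source .venv/bin/activate    # on Windows: .venv\Scripts\activate
touch moderate_image.py
```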

Install the Azure AI Vision SDK

Open your terminal or command prompt in your project directory and install the SDK package. This quickstart uses the Computer Vision client library for Python, whose adult visual feature provides the adult, racy, and gory classifications used below:

```bash
pip install azure-cognitiveservices-vision-computervision
```

Code Example (Python)

Replace YOUR_VISION_ENDPOINT and YOUR_VISION_KEY with the values from the Azure portal, or, preferably, set them as environment variables as described below.

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials
import os

# Load environment variables from a .env file if python-dotenv is installed
try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass  # python-dotenv is optional; plain environment variables still work

# --- Configuration ---
# Replace with your Azure AI Vision endpoint and key.
# It's recommended to use environment variables for sensitive information.
VISION_ENDPOINT = os.getenv("VISION_ENDPOINT", "YOUR_VISION_ENDPOINT")
VISION_KEY = os.getenv("VISION_KEY", "YOUR_VISION_KEY")

# Path to the image you want to analyze
IMAGE_PATH = "sample_image.jpg"  # Make sure a sample image exists in your directory

# --- Initialize the client ---
def initialize_client():
    """Initializes the ComputerVisionClient."""
    if VISION_ENDPOINT in ("", "YOUR_VISION_ENDPOINT") or VISION_KEY in ("", "YOUR_VISION_KEY"):
        print("Error: Set the VISION_ENDPOINT and VISION_KEY environment variables "
              "or replace the placeholders in the script.")
        return None

    try:
        client = ComputerVisionClient(VISION_ENDPOINT, CognitiveServicesCredentials(VISION_KEY))
        print("ComputerVisionClient initialized successfully.")
        return client
    except Exception as e:
        print(f"Error initializing ComputerVisionClient: {e}")
        return None

# --- Analyze image for content moderation ---
def analyze_image_content_moderation(client: ComputerVisionClient, image_path: str):
    """Analyzes an image for potentially offensive content."""
    if not client:
        return

    print(f"Analyzing image: {image_path} for content moderation...")

    try:
        with open(image_path, "rb") as image_file:
            # The 'adult' visual feature covers the adult, racy, and gory classifications
            result = client.analyze_image_in_stream(
                image_file,
                visual_features=[VisualFeatureTypes.adult],
            )

        if result.adult:
            print("\n--- Content Safety Results ---")
            print(f"Is adult content: {result.adult.is_adult_content}")
            print(f"  Adult score: {result.adult.adult_score:.4f}")
            print(f"Is racy content: {result.adult.is_racy_content}")
            print(f"  Racy score: {result.adult.racy_score:.4f}")
            print(f"Is gory content: {result.adult.is_gory_content}")
            print(f"  Gore score: {result.adult.gore_score:.4f}")
        else:
            print("No content safety information was returned.")

    except FileNotFoundError:
        print(f"Error: The image file '{image_path}' was not found.")
    except Exception as e:
        print(f"An error occurred during image analysis: {e}")

# --- Main execution ---
if __name__ == "__main__":
    # Create a placeholder sample_image.jpg for demonstration purposes if it doesn't exist
    if not os.path.exists(IMAGE_PATH):
        print(f"'{IMAGE_PATH}' not found. Creating a blank placeholder image.")
        try:
            from PIL import Image, ImageDraw
            img = Image.new("RGB", (100, 100), color="red")
            d = ImageDraw.Draw(img)
            d.text((10, 10), "Sample", fill=(255, 255, 0))
            img.save(IMAGE_PATH)
            print(f"Placeholder image '{IMAGE_PATH}' created. "
                  "For meaningful results, replace it with a real image.")
        except ImportError:
            print("Pillow is not installed. Install it ('pip install Pillow') "
                  "or provide your own image.")
        except Exception as e:
            print(f"Error creating placeholder image: {e}")

    print("Starting content moderation quickstart...")
    vision_client = initialize_client()
    if vision_client:
        analyze_image_content_moderation(vision_client, IMAGE_PATH)
    print("\nQuickstart finished.")
```
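
If you prefer to call the service without the SDK, the same Adult feature is available through the Image Analysis 3.2 REST API. A minimal sketch using the requests library, assuming the same environment variables as above:

```python
import os
import requests

VISION_ENDPOINT = os.environ["VISION_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
VISION_KEY = os.environ["VISION_KEY"]

analyze_url = f"{VISION_ENDPOINT}/vision/v3.2/analyze"
headers = {
    "Ocp-Apim-Subscription-Key": VISION_KEY,
    "Content-Type": "application/octet-stream",
}
params = {"visualFeatures": "Adult"}

with open("sample_image.jpg", "rb") as image_file:
    response = requests.post(analyze_url, headers=headers, params=params, data=image_file)
response.raise_for_status()

# The "adult" object contains isAdultContent, isRacyContent, isGoryContent and their scores
print(response.json()["adult"])
```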

To use environment variables:

  1. Install python-dotenv: pip install python-dotenv
  2. Create a file named .env in the same directory as your Python script.
  3. Add your credentials to the .env file:
```env
VISION_ENDPOINT=YOUR_VISION_ENDPOINT
VISION_KEY=YOUR_VISION_KEY
```

Note: For production applications, consider more secure methods for managing secrets, such as Azure Key Vault.
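
As a rough sketch of the Key Vault approach, assuming the azure-identity and azure-keyvault-secrets packages are installed and the vault URL and secret name below are replaced with your own:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL and secret name; replace with your own values
VAULT_URL = "https://my-vault-name.vault.azure.net"

# DefaultAzureCredential picks up managed identities, Azure CLI logins, etc.
credential = DefaultAzureCredential()
secret_client = SecretClient(vault_url=VAULT_URL, credential=credential)

# Read the Vision key from a secret instead of a .env file
vision_key = secret_client.get_secret("vision-key").value
```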

Run the application

Save the code above as moderate_image.py. Make sure you have an image file named sample_image.jpg in the same directory, or update the IMAGE_PATH variable to point to your image.

Run the script from your terminal:

```bash
python moderate_image.py
```

The output shows the adult, racy, and gory classifications, each as a boolean flag plus a confidence score.

Scores range from 0 to 1: a score near 0 indicates the content type is unlikely to be present, while a score near 1 indicates a high likelihood that it is.
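
How you act on these values is application-specific. As one illustrative pattern (the 0.5 cutoff below is arbitrary, not a service recommendation), you can combine the boolean flags with your own score threshold:

```python
def should_block(adult_info, score_threshold=0.5):
    """Illustrative moderation decision; tune the threshold for your own use case."""
    return (
        adult_info.is_adult_content
        or adult_info.is_gory_content
        or adult_info.adult_score >= score_threshold
        or adult_info.gore_score >= score_threshold
    )

# Example usage with the result returned by analyze_image_in_stream:
# if should_block(result.adult):
#     print("Image flagged for review.")
```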

Next Steps