This quickstart guide shows you how to get started with the Azure Content Moderator API to moderate text for potentially offensive content, profanity, and personally identifiable information (PII).
Prerequisites
Before you begin, make sure you have the following:
- An Azure subscription. If you don't have one, create a free account.
- A Content Moderator resource created in the Azure portal. For instructions, see Create a Content Moderator resource.
- The subscription key and endpoint for your Content Moderator resource.
- A development environment set up with a programming language of your choice (e.g., Python, C#, Node.js).
Step 1: Create a Content Moderator Resource
If you haven't already, create a Content Moderator resource in the Azure portal. Navigate to Create a resource, search for "Content Moderator", and follow the prompts to create your resource. Note down the Key and Endpoint from the resource's overview page.
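Rather than hard-coding the key and endpoint in source files, you can read them from environment variables. The sketch below is a minimal helper, assuming variable names CONTENT_MODERATOR_KEY and CONTENT_MODERATOR_ENDPOINT, which are illustrative choices, not names mandated by Azure.

```python
import os

def load_moderator_config():
    """Read Content Moderator credentials from environment variables.

    CONTENT_MODERATOR_KEY and CONTENT_MODERATOR_ENDPOINT are assumed
    variable names for this guide; use whatever names fit your setup.
    """
    key = os.environ.get("CONTENT_MODERATOR_KEY")
    endpoint = os.environ.get("CONTENT_MODERATOR_ENDPOINT")
    if not key or not endpoint:
        raise RuntimeError(
            "Set CONTENT_MODERATOR_KEY and CONTENT_MODERATOR_ENDPOINT "
            "before running the samples."
        )
    return key, endpoint
```

This keeps credentials out of version control and lets the same code run unchanged across environments.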
Step 2: Install the SDK
You can interact with the Content Moderator API using REST calls, but using the official SDKs is often more convenient. Install the appropriate SDK for your development language.
Python
pip install azure-cognitiveservices-vision-contentmoderator
C#
dotnet add package Microsoft.Azure.CognitiveServices.ContentModerator
Node.js
npm install @azure/cognitiveservices-contentmoderator
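If you prefer the REST route mentioned above, you can build the request yourself. The sketch below assembles a ProcessText/Screen call with Python's standard library; the route and query parameters follow the classic Content Moderator REST API, so verify them against your resource's API reference before relying on them.

```python
import urllib.parse
import urllib.request

def build_screen_request(endpoint, subscription_key, text):
    """Build a POST request for the text Screen operation.

    The /contentmoderator/moderate/v1.0/ProcessText/Screen route and the
    classify/PII query parameters are assumptions based on the classic
    Content Moderator REST API.
    """
    url = (
        endpoint.rstrip("/")
        + "/contentmoderator/moderate/v1.0/ProcessText/Screen?"
        + urllib.parse.urlencode({"classify": "True", "PII": "True"})
    )
    headers = {
        "Content-Type": "text/plain",
        "Ocp-Apim-Subscription-Key": subscription_key,
    }
    return urllib.request.Request(
        url, data=text.encode("utf-8"), headers=headers, method="POST"
    )

def screen_text_rest(endpoint, subscription_key, text):
    # Sends the request; requires a live resource and network access.
    req = build_screen_request(endpoint, subscription_key, text)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Separating request construction from sending makes the URL and headers easy to inspect without calling the service.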
Step 3: Write the Code
Below is a Python example for moderating text. The C# and Node.js SDKs expose the same operations and follow a similar pattern.
Python Example
This example demonstrates how to screen text for profanity, classify it, and detect PII.
import io

from azure.cognitiveservices.vision.contentmoderator import ContentModeratorClient
from msrest.authentication import CognitiveServicesCredentials

# Replace with your key and endpoint
subscription_key = "YOUR_SUBSCRIPTION_KEY"
endpoint = "YOUR_ENDPOINT"

client = ContentModeratorClient(endpoint, CognitiveServicesCredentials(subscription_key))

text_to_moderate = "This is a sample text that might contain offensive language like hell or damn."

try:
    # screen_text expects a text stream; it detects profane terms and PII
    # and, with classify=True, also returns category scores and a review
    # recommendation.
    screen = client.text_moderation.screen_text(
        text_content_type="text/plain",
        text_content=io.BytesIO(text_to_moderate.encode("utf-8")),
        language="eng",
        pii=True,
        classify=True
    )
    print("Text Moderation Results:")
    if screen.terms:
        print(f"  Profane terms detected: {[t.term for t in screen.terms]}")
    if screen.pii:
        print(f"  Personally identifiable information (PII) detected: {screen.pii}")
    if screen.classification:
        print(f"  Review recommended: {screen.classification.review_recommended}")
        print(f"  Category scores: {screen.classification}")
    if not (screen.terms or screen.pii):
        print("  No profane terms or PII detected.")
except Exception as e:
    print(f"An error occurred: {e}")
Note: Remember to replace YOUR_SUBSCRIPTION_KEY and YOUR_ENDPOINT with your actual credentials.
Step 4: Run the Application
Execute your code. The output shows whether review is recommended for the text, lists any detected profane terms, and reports any detected PII.
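If you call the REST API directly, the response arrives as JSON. The sketch below condenses it into a small summary; the field names (Classification, ReviewRecommended, Terms, PII) follow the classic Screen response shape and should be checked against your own API responses.

```python
import json

def summarize_screen_result(raw_json):
    """Condense a ProcessText/Screen JSON response into a summary dict.

    The Classification/ReviewRecommended/Terms/PII field names are
    assumptions based on the classic Screen response format.
    """
    result = json.loads(raw_json)
    classification = result.get("Classification") or {}
    return {
        "review_recommended": bool(classification.get("ReviewRecommended")),
        "profane_terms": [t["Term"] for t in (result.get("Terms") or [])],
        "has_pii": bool(result.get("PII")),
    }
```

A summary like this is convenient to log or to feed into downstream moderation logic.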
What's next?
Explore more advanced features of Content Moderator, such as image moderation, video moderation, and setting up review workflows. Visit the official documentation for comprehensive details.
Important Considerations
Always handle sensitive data responsibly and ensure compliance with privacy regulations. The Content Moderator API is a tool to assist in content moderation, not a definitive solution. Human review may still be necessary.
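One common way to combine automated screening with human review is threshold-based routing. The sketch below is illustrative only: the function name and thresholds are hypothetical, and the right cutoffs depend on your content and policy.

```python
# Hypothetical thresholds; tune these against your own content and policy.
REVIEW_THRESHOLD = 0.5
BLOCK_THRESHOLD = 0.9

def route_content(category_scores, terms_found, pii_found):
    """Decide how to handle a screened text.

    category_scores: iterable of classification scores in [0, 1]
    terms_found / pii_found: booleans derived from the screen result
    Returns "allow", "review", or "block".
    """
    top = max(category_scores, default=0.0)
    if top >= BLOCK_THRESHOLD:
        return "block"
    if top >= REVIEW_THRESHOLD or terms_found or pii_found:
        return "review"
    return "allow"
```

Routing mid-confidence results to a human queue, rather than blocking outright, keeps automation from over-enforcing while still catching clear violations.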