Azure AI Content Safety SDK for JavaScript

Introduction to Azure AI Content Safety SDK

The Azure AI Content Safety SDK for JavaScript empowers developers to build applications that can detect and mitigate harmful content. It leverages AI models to analyze text and images for several categories of risk, helping you provide a safer online environment for your users.

This SDK integrates seamlessly with Azure Content Safety, a cloud-based service designed to identify and flag potentially offensive, hateful, or unsafe content. Whether you're moderating user-generated content, ensuring brand safety, or complying with regulatory requirements, this SDK provides the tools you need.

Key Features:

- Text analysis: screen text for Hate, SelfHarm, Sexual, and Violence content.
- Image analysis: screen base64-encoded images against the same categories.
- Category filtering, so each request checks only the risks you care about.
- Configurable severity thresholds via the SeverityLevel enum.

Installation

Install the Azure AI Content Safety SDK for JavaScript using npm or yarn:

npm install @azure/cognitiveservices-content-safety
yarn add @azure/cognitiveservices-content-safety

Getting Started

To start using the Content Safety client, you'll need an Azure subscription and a Content Safety resource. You can create these resources through the Azure portal.
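
If you prefer the command line, the resource can also be created with the Azure CLI. A sketch follows; the account name, resource group, and region are placeholders to replace with your own:

az cognitiveservices account create \
    --name my-content-safety \
    --resource-group my-resource-group \
    --kind ContentSafety \
    --sku S0 \
    --location eastus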

Here's a basic example of how to initialize the client and analyze text:


import { TextAnalysisClient, AnalyzeTextRequest, SeverityLevel } from "@azure/cognitiveservices-content-safety";

async function analyzeText(endpoint: string, apiKey: string, textToAnalyze: string) {
    const client = new TextAnalysisClient(endpoint, apiKey);

    // Build the request: the text to screen, the categories to check,
    // and the severity threshold to apply.
    const textAnalyzeRequest: AnalyzeTextRequest = {
        text: textToAnalyze,
        categories: ["Hate", "SelfHarm", "Sexual", "Violence"],
        severityLevel: SeverityLevel.High
    };

    const result = await client.analyzeText(textAnalyzeRequest);

    console.log(JSON.stringify(result, null, 2));
}

// Replace with your actual endpoint and API key
const CONTENT_SAFETY_ENDPOINT = "YOUR_CONTENT_SAFETY_ENDPOINT";
const CONTENT_SAFETY_API_KEY = "YOUR_CONTENT_SAFETY_API_KEY";
const MY_TEXT = "This is a sample text that may contain harmful content.";

analyzeText(CONTENT_SAFETY_ENDPOINT, CONTENT_SAFETY_API_KEY, MY_TEXT);
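
In practice you'd avoid hardcoding credentials. A minimal sketch that reads them from environment variables instead (the variable names here are illustrative, not required by the SDK):

// Read the endpoint and key from the environment rather than hardcoding them.
const endpoint = process.env.CONTENT_SAFETY_ENDPOINT ?? "";
const apiKey = process.env.CONTENT_SAFETY_API_KEY ?? "";

analyzeText(endpoint, apiKey, "Some user-generated text to screen.");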

Core Concepts

The SDK is built around a few building blocks: clients (TextAnalysisClient, ImageAnalysisClient) that wrap your service endpoint and API key, request objects (AnalyzeTextRequest, AnalyzeImageRequest) that describe what to analyze and which categories to screen, and the SeverityLevel enum that expresses how severe flagged content is. The sections below document each of these types.

API Reference

TextAnalysisClient

Represents the client for interacting with the Content Safety text analysis API.


class TextAnalysisClient {
    constructor(endpoint: string, apiKey: string);
    analyzeText(request: AnalyzeTextRequest): Promise<AnalyzeTextResponse>;
}

ImageAnalysisClient

Represents the client for interacting with the Content Safety image analysis API.


class ImageAnalysisClient {
    constructor(endpoint: string, apiKey: string);
    analyzeImage(request: AnalyzeImageRequest): Promise<AnalyzeImageResponse>;
}

AnalyzeTextRequest

Defines the parameters for text analysis.


interface AnalyzeTextRequest {
    text: string;
    categories?: ("Hate" | "SelfHarm" | "Sexual" | "Violence")[];
    severityLevel?: SeverityLevel;
}
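
Since categories and severityLevel are optional, a minimal request needs only the text to analyze:

// Minimal request: omit the optional fields and supply only the text.
const minimalRequest: AnalyzeTextRequest = {
    text: "Some text to screen."
};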

AnalyzeImageRequest

Defines the parameters for image analysis.


interface AnalyzeImageRequest {
    content: string; // Base64-encoded image data
    "content-type": "image/jpeg" | "image/png" | "image/gif" | "image/bmp";
    categories?: ("Hate" | "SelfHarm" | "Sexual" | "Violence")[];
    severityLevel?: SeverityLevel;
}

SeverityLevel Enum

Enumeration for severity levels.


enum SeverityLevel {
    VeryLow,
    Low,
    Medium,
    High
}
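
Because TypeScript numeric enums assign ascending values (here VeryLow = 0 through High = 3), severity levels can be compared numerically. A small sketch of a threshold check built on that:

// Returns true when a reported severity meets or exceeds a threshold.
// Relies on the enum's ascending numeric values (VeryLow = 0 ... High = 3).
function meetsThreshold(level: SeverityLevel, threshold: SeverityLevel): boolean {
    return level >= threshold;
}

meetsThreshold(SeverityLevel.High, SeverityLevel.Medium); // true
meetsThreshold(SeverityLevel.Low, SeverityLevel.Medium);  // false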

Examples

Analyzing Image Content

To analyze an image, you need to provide the image data as a base64 encoded string.


import { ImageAnalysisClient, AnalyzeImageRequest, SeverityLevel } from "@azure/cognitiveservices-content-safety";
import * as fs from 'fs';

async function analyzeImage(endpoint: string, apiKey: string, imagePath: string) {
    const client = new ImageAnalysisClient(endpoint, apiKey);

    // Read the image from disk and base64-encode it for the request payload.
    const imageBuffer = fs.readFileSync(imagePath);
    const base64Image = imageBuffer.toString('base64');

    const imageAnalyzeRequest: AnalyzeImageRequest = {
        content: base64Image,
        "content-type": "image/jpeg", // match the actual format of your image
        categories: ["Sexual", "Violence"],
        severityLevel: SeverityLevel.Medium
    };

    const result = await client.analyzeImage(imageAnalyzeRequest);

    console.log(JSON.stringify(result, null, 2));
}

// Replace with your actual endpoint, API key, and image path
const IMAGE_ENDPOINT = "YOUR_CONTENT_SAFETY_ENDPOINT";
const IMAGE_API_KEY = "YOUR_CONTENT_SAFETY_API_KEY";
const MY_IMAGE_PATH = "./path/to/your/image.jpg";

// analyzeImage(IMAGE_ENDPOINT, IMAGE_API_KEY, MY_IMAGE_PATH);
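
Note that fs.readFileSync blocks the event loop, which is fine in a one-off script but undesirable in a server. A non-blocking sketch of the encoding step using Node's promise-based fs API (the helper name is illustrative):

import { readFile } from "fs/promises";

// Non-blocking alternative to readFileSync for server workloads.
async function encodeImageAsBase64(imagePath: string): Promise<string> {
    const buffer = await readFile(imagePath);
    return buffer.toString("base64");
}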

Advanced Usage

Explore more advanced features such as handling custom categories, configuring retry policies, and integrating with other Azure services. For detailed information and best practices, please refer to the official Azure Content Safety documentation.
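
The client surface shown above does not expose retry options directly, so one way to add resilience is a generic exponential-backoff wrapper around any analysis call. This is a sketch, not an SDK feature; the retryCount and baseDelayMs defaults are illustrative:

// Generic retry helper with exponential backoff.
async function withRetry<T>(operation: () => Promise<T>, retryCount = 3, baseDelayMs = 500): Promise<T> {
    let lastError: unknown;
    for (let attempt = 0; attempt < retryCount; attempt++) {
        try {
            return await operation();
        } catch (error) {
            lastError = error;
            // Wait 500ms, 1000ms, 2000ms, ... before the next attempt.
            await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
        }
    }
    throw lastError;
}

// Usage: withRetry(() => client.analyzeText(textAnalyzeRequest));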