[DOCS]

🚀 [01_QUICK_START]

Integrate content moderation in under 5 minutes

Step 1: Get Your API Key

Navigate to the Keys section in your dashboard and generate a new API key.


Step 2: Make Your First Request

curl -X POST \
  https://zodiac-api-five.vercel.app/v1/check \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Test message",
    "metadata": {
      "senderId": "user_123",
      "platform": "web"
    },
    "mode": "community"
  }'

Step 3: Handle the Response

{
  "isSafe": true,
  "reason": null,
  "reportId": null,
  "userRiskScore": 0.0
}
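As a sketch of that branching (assuming a Node 18+ backend; the helper names here are illustrative, not part of the API):

```javascript
// Illustrative helpers, not part of the Zodiac API itself.
// Build the /v1/check request body: only `text` and
// `metadata.senderId` are required.
function buildCheckRequest(text, senderId, mode = "community") {
  return {
    text,
    metadata: { senderId, platform: "web" },
    mode,
  };
}

// Branch on the response: block unsafe content and keep the
// reportId for audit logs.
function handleCheckResponse(data) {
  if (!data.isSafe) {
    return { blocked: true, reason: data.reason, reportId: data.reportId };
  }
  return { blocked: false };
}

// Wire-up (server-side only, Node 18+ global fetch):
// const res = await fetch("https://zodiac-api-five.vercel.app/v1/check", {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${process.env.ZODIAC_API_KEY}`,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify(buildCheckRequest("Test message", "user_123")),
// });
// const outcome = handleCheckResponse(await res.json());
```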

🔑 [02_AUTHENTICATION]

All API requests require authentication using Bearer tokens in the Authorization header.

Authorization: Bearer YOUR_API_KEY

⚠️ Security: Never expose your API key in client-side code. Always make requests from your backend server.
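One way to enforce that rule is to centralize header construction on the server. A minimal sketch (the helper name is ours, not part of the API):

```javascript
// Sketch: build the auth headers server-side so the key never
// reaches the browser. Read the key from an environment variable,
// never from client code or a shipped bundle.
function buildAuthHeaders(apiKey) {
  if (!apiKey) {
    throw new Error("ZODIAC_API_KEY is not set in the server environment");
  }
  return {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
}

// Usage (backend only):
// const headers = buildAuthHeaders(process.env.ZODIAC_API_KEY);
```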

📦 [SDK_QUICK_START]

NEW

Use the official zodiac-guard npm package for a simpler, cleaner integration in under 60 seconds.

Step 1: Installation

npm install zodiac-guard

Step 2: Initialization

Import and initialize the client with your API key. It connects to the production API by default.

// For modern projects (ESM/TypeScript)
import Zodiac from "zodiac-guard";

// For legacy projects (CommonJS)
// const Zodiac = require("zodiac-guard");

const zodiac = new Zodiac(process.env.ZODIAC_API_KEY);

Step 3: Basic Usage

The check method is async and returns a full moderation report.

const result = await zodiac.check("fuck u", {
  mode: "community", // Modes: 'community', 'dating', 'kids', 'marketplace'
  metadata: {
    senderId: "user_8821", // REQUIRED for risk tracking
    platform: "web-app"
  }
});

console.log(result);
/* {
  isSafe: false,
  reason: 'Profanity',
  reportId: '1744aac6-27d9-4430-bb31-85c98ca45cdd',
  userRiskScore: 0.2
} */

Step 4: Advanced Options (Metadata & Modes)

Provide a senderId and choose a moderation mode for full risk tracking.

const options = {
  mode: "community", 
  // Options: 'community', 'dating', 
  //          'kids', 'marketplace'
  metadata: {
    senderId: "user_12345",
    platform: "web-chat"
  }
};

const result = await zodiac.check(
  "some potentially bad text", 
  options
);

console.log(result.isSafe);        // false
console.log(result.reason);        // "Profanity"
console.log(result.reportId);      // UUID for your logs
console.log(result.userRiskScore); // 0.0 – 1.0

🛡️ Error Handling & Reliability

The SDK uses fail-safe logic: if the moderation service is unreachable or credits run out, it defaults to isSafe: true so your app never crashes or blocks users due to a network hiccup.

const result = await zodiac.check("text");

if (result.error) {
  console.warn(
    "Moderation unavailable, defaulting to safe:", 
    result.error
  );
}

💡 The SDK handles retries, timeouts, and upstream errors automatically — no extra configuration needed.

📡 [03_API_ENDPOINTS]

POST /v1/check

Moderate content and receive instant safety analysis with user risk scoring.

Request Body

{
  "text": "string (REQUIRED)",
  "metadata": {
    "senderId": "string (REQUIRED)",
    "platform": "string (optional)",
    "chatRoomId": "string (optional)",
    "userId": "string (optional)"
  },
  "mode": "string (optional)"
}

Response (200 OK)

{
  "isSafe": "boolean",
  "reason": "string | null",
  "reportId": "uuid string | null",
  "userRiskScore": "number (0.0 – 1.0)"
}

GET /v1/usage

Get your current usage statistics and remaining credits.

Response (200 OK)

{
  "remainingCredits": 896,
  "usedThisMonth": 104,
  "limit": 1000
}
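A hedged sketch of acting on that payload (the 10% threshold is an illustrative choice, not an API recommendation):

```javascript
// Returns true when remaining credits fall to or below the given
// fraction of the monthly limit.
function creditsLow(usage, thresholdRatio = 0.1) {
  return usage.remainingCredits / usage.limit <= thresholdRatio;
}

// Fetching the stats (server-side, Node 18+ global fetch; alertOps
// is a stand-in for your own alerting):
// const res = await fetch("https://zodiac-api-five.vercel.app/v1/usage", {
//   headers: { Authorization: `Bearer ${process.env.ZODIAC_API_KEY}` },
// });
// if (creditsLow(await res.json())) alertOps("Zodiac credits running low");
```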

🎯 [04_MODERATION_MODES]

Choose the moderation mode that best fits your use case.

🏘️ COMMUNITY (Default)

Standard toxicity, hate speech, and general harassment detection.

mode: "community"

💝 DATING

Allows flirting but blocks harassment and threats. Optimized for dating apps.

mode: "dating"

👶 KIDS

Zero-tolerance for profanity, violence, and mature themes.

mode: "kids"

🛒 MARKETPLACE

Scam detection, fake payment links, and suspicious requests.

mode: "marketplace"
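If one backend serves several product surfaces, a small lookup keeps the mode choice in one place. A sketch (the surface names are hypothetical; the mode strings are the four documented values):

```javascript
// Hypothetical surface names mapped to documented modes.
const MODE_BY_SURFACE = {
  "group-chat": "community",
  "dating-dm": "dating",
  "kids-zone": "kids",
  "listing-inbox": "marketplace",
};

// Fall back to "community", the documented default mode.
function modeForSurface(surface) {
  return MODE_BY_SURFACE[surface] ?? "community";
}
```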

📋 [RESPONSE_TYPES]

All responses from the API follow a consistent structure. Below are the possible response shapes.

✓ SAFE RESPONSE
{
  "isSafe": true,
  "reason": null,
  "reportId": null,
  "userRiskScore": 0.0
}

✗ UNSAFE RESPONSE
{
  "isSafe": false,
  "reason": "profanity detected",
  "reportId": "uuid-1234-5678-abcd",
  "userRiskScore": 0.65
}

The risk score escalates per sender with each repeated unsafe message: 0.2 → 0.4 → 0.6 → 0.8 → 1.0

⚠ TIMEOUT RESPONSE
{
  "isSafe": true,
  "error": "Upstream timeout"
}

💻 [05_CODE_EXAMPLES]

Node.js / Express

const axios = require('axios');

app.post('/api/chat/send', async (req, res) => {
  const { userId, message, roomId } = req.body;
  
  try {
    const response = await axios.post(
      'https://zodiac-api-five.vercel.app/v1/check', {
      text: message,
      metadata: {
        senderId: userId,
        chatRoomId: roomId,
        platform: 'web'
      }
    }, {
      headers: {
        'Authorization': `Bearer ${process.env.ZODIAC_API_KEY}`
      }
    });
    
    if (!response.data.isSafe) {
      return res.status(400).json({
        error: 'Message blocked',
        reason: response.data.reason
      });
    }
    
    res.json({ success: true });
  } catch (error) {
    res.status(500).json({ error: 'Service unavailable' });
  }
});

💬 [CHAT_INTEGRATION]

Integrate moderation before saving or broadcasting each message. The safest pattern is: validate → moderate → block/allow → store.

Step 1: Call moderation before publish

app.post('/chat/send', async (req, res) => {
  const { userId, roomId, message } = req.body;

  const mod = await axios.post(
    'https://zodiac-api-five.vercel.app/v1/check', {
    text: message,
    metadata: {
      senderId: userId,
      chatRoomId: roomId,
      platform: 'web',
      type: 'chat_message'
    },
    mode: 'community'
  }, {
    headers: { 
      Authorization: `Bearer ${process.env.ZODIAC_API_KEY}` 
    }
  });

  if (!mod.data.isSafe) {
    return res.status(400).json({
      blocked: true,
      reason: mod.data.reason,
      userRiskScore: mod.data.userRiskScore,
      reportId: mod.data.reportId
    });
  }

  res.json({ blocked: false, userRiskScore: mod.data.userRiskScore });
});

Step 2: Apply progressive chat actions by risk

function chatActionByRisk(score) {
  if (score >= 0.8) return 'temp-ban';
  if (score >= 0.6) return 'mute-10m';
  if (score >= 0.4) return 'slow-mode';
  return 'warn';
}

🗨️ [COMMENTS_INTEGRATION]

Use the same endpoint to scan comments before publish. Keep `senderId` stable per account so risk history remains accurate.

Step 1: Moderate comment on submit

app.post('/posts/:postId/comments', async (req, res) => {
  const { postId } = req.params;
  const { userId, content } = req.body;

  const mod = await axios.post(
    'https://zodiac-api-five.vercel.app/v1/check', {
    text: content,
    metadata: {
      senderId: userId,
      postId,
      platform: 'web',
      type: 'blog_comment'
    },
    mode: 'community'
  }, {
    headers: { 
      Authorization: `Bearer ${process.env.ZODIAC_API_KEY}` 
    }
  });

  if (!mod.data.isSafe) {
    return res.status(400).json({
      error: 'Comment blocked',
      reason: mod.data.reason,
      userRiskScore: mod.data.userRiskScore,
      reportId: mod.data.reportId
    });
  }

  res.status(201).json({ success: true, userRiskScore: mod.data.userRiskScore });
});

Step 2: Queue risky-but-safe comments for review

if (mod.data.isSafe && mod.data.userRiskScore >= 0.6) {
  await queueForReview({ 
    postId, userId, content, 
    risk: mod.data.userRiskScore 
  });
}

💬 [SDK_CHAT_INTEGRATION]

NEW

The same chat integration using the zodiac-guard SDK: less boilerplate, built-in error handling, and automatic fail-safe behavior.

Step 1: Initialize the SDK

// For modern projects (ESM/TypeScript)
import Zodiac from "zodiac-guard";

// For legacy projects (CommonJS)
// const Zodiac = require("zodiac-guard");

const zodiac = new Zodiac(process.env.ZODIAC_API_KEY);

Step 2: Moderate before sending

app.post('/chat/send', async (req, res) => {
  const { userId, roomId, message } = req.body;

  const result = await zodiac.check(message, {
    mode: 'community',
    metadata: {
      senderId: userId,
      chatRoomId: roomId,
      platform: 'web'
    }
  });

  if (!result.isSafe) {
    return res.status(400).json({
      blocked: true,
      reason: result.reason,
      userRiskScore: result.userRiskScore,
      reportId: result.reportId
    });
  }

  // Save & broadcast message here...
  res.json({ blocked: false, userRiskScore: result.userRiskScore });
});

Step 3: Apply progressive actions by risk score

function chatActionByRisk(score) {
  if (score >= 0.8) return 'temp-ban';
  if (score >= 0.6) return 'mute-10m';
  if (score >= 0.4) return 'slow-mode';
  return 'warn';
}

const action = chatActionByRisk(result.userRiskScore);
await applyAction(userId, action);

💡 The SDK's fail-safe ensures your chat never breaks if moderation is temporarily unavailable — it defaults to isSafe: true and sets result.error for logging.

🗨️ [SDK_COMMENTS_INTEGRATION]

NEW

Moderate blog comments, forum posts, or product reviews using the SDK. Keep senderId stable per account to maintain accurate risk history.

Step 1: Moderate comment on submit

// For modern projects (ESM/TypeScript)
import Zodiac from "zodiac-guard";

// For legacy projects (CommonJS)
// const Zodiac = require("zodiac-guard");

const zodiac = new Zodiac(process.env.ZODIAC_API_KEY);

app.post('/posts/:postId/comments', async (req, res) => {
  const { postId } = req.params;
  const { userId, content } = req.body;

  const result = await zodiac.check(content, {
    mode: 'community',
    metadata: { senderId: userId, postId, platform: 'web' }
  });

  if (!result.isSafe) {
    return res.status(400).json({
      error: 'Comment blocked',
      reason: result.reason,
      userRiskScore: result.userRiskScore,
      reportId: result.reportId
    });
  }

  // Save the comment to your database...
  res.status(201).json({ success: true, userRiskScore: result.userRiskScore });
});

Step 2: Queue risky-but-safe comments for review

if (result.isSafe && result.userRiskScore >= 0.6) {
  await queueForReview({ 
    postId, userId, content, 
    risk: result.userRiskScore 
  });
  return res.status(202).json({ 
    queued: true, message: 'Comment pending review' 
  });
}

Step 3: Handle SDK errors gracefully

// The SDK never throws — check result.error
if (result.error) {
  console.warn('[zodiac] Skipped moderation:', result.error);
  // Optionally log & proceed, or queue for manual review
}

💡 Always store the result.reportId alongside flagged content in your database — it lets you trace and audit moderation decisions later.

📊 [06_USER_RISK_SCORING]

Every sender has a persistent risk profile per developer account. Each unsafe message increments the sender's violation count, and the score is recalculated as:

riskScore = min(violationCount × 0.2, 1.0)

1 violation  -> 0.2
2 violations -> 0.4
3 violations -> 0.6
4 violations -> 0.8
5+           -> 1.0
0.0 – 0.3   Low Risk        (0–1 violations)
0.4 – 0.7   Medium Risk     (2–3 violations)
0.8 – 1.0   Critical Risk   (4+ violations)

Important: use a consistent `metadata.senderId` for each user. Changing sender IDs resets effective history for that user.
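The scoring rule above can be sketched directly. (Rounding to one decimal avoids JavaScript floating-point drift, e.g. 0.2 × 3 evaluating to 0.6000000000000001; the tier helper mirrors the bands listed above.)

```javascript
// riskScore = min(violationCount × 0.2, 1.0)
function riskScore(violationCount) {
  // violationCount × 2 / 10 keeps each step exact in floating point.
  return Math.min(Math.round(violationCount * 2) / 10, 1.0);
}

// Map a score onto the documented risk bands.
function riskTier(score) {
  if (score >= 0.8) return "critical";
  if (score >= 0.4) return "medium";
  return "low";
}
```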

⚠️ [07_ERROR_HANDLING]

Implement robust error handling so your application keeps working even when the moderation service degrades.

If the moderation service times out, the API returns:

{
  "isSafe": true,
  "error": "Upstream timeout"
}
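For raw-HTTP integrations, you can mirror the SDK's fail-open behavior by checking for that error field. A sketch (the function name is ours; `logger` is a stand-in for your own logging):

```javascript
// Allow content when moderation failed open, but record that the
// check was skipped so it can be audited later.
function interpretModeration(body, logger = console) {
  if (body.error) {
    logger.warn("[moderation] fail-open:", body.error);
    return { allow: true, failedOpen: true };
  }
  return { allow: body.isSafe, failedOpen: false };
}
```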

✅ [08_BEST_PRACTICES]

Always Include senderId

The metadata.senderId field is REQUIRED for user tracking and risk scoring.

Never Expose API Keys

Always make moderation requests from your backend server.

Store reportId

When content is flagged, save the reportId for tracking and analytics.

Use the SDK for Simpler Integration

The zodiac-guard SDK handles retries, timeouts, and fail-safe defaults, and is recommended over raw HTTP calls.

⚡ [PERFORMANCE_TARGETS]

Response Time (p95)
<300ms
Throughput
1000+ req/s
Uptime
99.9%