[DOCS]

🚀 [01_QUICK_START]

Integrate content moderation in under 5 minutes


🔑 [02_AUTHENTICATION]

All API requests require authentication using Bearer tokens in the Authorization header.

Authorization: Bearer YOUR_API_KEY

⚠️ Security: Never expose your API key in client-side code. Always make requests from your backend server.

📡 [03_API_ENDPOINTS]

POST /v1/check

Moderate content and receive instant safety analysis with user risk scoring.

Request Body

{
  "text": "string (REQUIRED)",
  "metadata": {
    "senderId": "string (REQUIRED)",
    "platform": "string (optional)",
    "chatRoomId": "string (optional)",
    "userId": "string (optional)"
  },
  "mode": "string (optional)"
}

Response (200 OK)

{
  "isSafe": true | false,
  "reason": "No policy violation",
  "reportId": "uuid-1234..." | null,
  "userRiskScore": 0.0 to 1.0
}
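
The request body above can be assembled with a small helper that includes only the fields you actually supply. This is a sketch, not an official client; the function name is illustrative:

```python
def build_check_payload(text, sender_id, *, platform=None,
                        chat_room_id=None, user_id=None, mode=None):
    """Assemble the request body for POST /v1/check.

    `text` and `metadata.senderId` are required; every other field is
    optional and omitted from the payload when not supplied.
    """
    metadata = {"senderId": sender_id}
    if platform:
        metadata["platform"] = platform
    if chat_room_id:
        metadata["chatRoomId"] = chat_room_id
    if user_id:
        metadata["userId"] = user_id
    payload = {"text": text, "metadata": metadata}
    if mode:
        payload["mode"] = mode
    return payload
```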

GET /v1/usage

Get your current usage statistics and remaining credits.

Response (200 OK)

{
  "remainingCredits": 896,
  "usedThisMonth": 104,
  "limit": 1000
}
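
You might poll this endpoint to alert before credits run out. A sketch; the `reserve` threshold is illustrative, not an API concept:

```python
def credits_exhausted(usage: dict, reserve: int = 50) -> bool:
    """Return True when remaining credits fall at or below a reserve.

    `usage` is the parsed /v1/usage response; `reserve` is a
    client-side safety margin of your choosing.
    """
    return usage["remainingCredits"] <= reserve
```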

🎯 [04_MODERATION_MODES]

Choose the moderation mode that best fits your use case.

🏘️ COMMUNITY (Default)

Standard toxicity, hate speech, and general harassment detection.

mode: "community"

💝 DATING

Allows flirting but blocks harassment and threats. Optimized for dating apps.

mode: "dating"

👶 KIDS

Zero-tolerance for profanity, violence, and mature themes.

mode: "kids"

🛒 MARKETPLACE

Scam detection, fake payment links, and suspicious requests.

mode: "marketplace"
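
A small client-side guard can catch typos in the mode string before they reach the API. The helper is hypothetical; the four mode values are the documented ones (the server defaults to `community` when the field is omitted):

```python
VALID_MODES = {"community", "dating", "kids", "marketplace"}

def with_mode(payload: dict, mode: str) -> dict:
    """Attach a moderation mode to a /v1/check payload, rejecting typos."""
    if mode not in VALID_MODES:
        raise ValueError(f"unknown moderation mode: {mode!r}")
    return {**payload, "mode": mode}
```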

📋 [RESPONSE_TYPES]

All responses from the API follow a consistent structure. Below are the possible response shapes.

✓ SAFE RESPONSE

{
  "isSafe": true,
  "reason": null,
  "reportId": null,
  "userRiskScore": 0.0
}

✗ UNSAFE RESPONSE

{
  "isSafe": false,
  "reason": "profanity detected",
  "reportId": "uuid-1234-5678-abcd",
  "userRiskScore": 0.65
}

Risk score escalates per sender with repeated unsafe content: 0.2 → 0.4 → 0.6 → 0.8 → 1.0

⚠ TIMEOUT RESPONSE

{
  "isSafe": true,
  "error": "Upstream timeout"
}

💻 [05_CODE_EXAMPLES]

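
A minimal end-to-end example using only the Python standard library. The base URL below is a placeholder (substitute your real endpoint), and the helper names are illustrative, not an official client:

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder -- substitute your real base URL

def build_check_request(text: str, sender_id: str, api_key: str,
                        mode: str = "community") -> urllib.request.Request:
    """Build an authenticated POST request for /v1/check."""
    body = json.dumps({
        "text": text,
        "metadata": {"senderId": sender_id},
        "mode": mode,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/v1/check",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def check_message(text: str, sender_id: str, api_key: str,
                  mode: str = "community") -> dict:
    """Send the request and return the parsed JSON verdict."""
    req = build_check_request(text, sender_id, api_key, mode)
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)
```

Keeping request construction separate from the network call makes the payload shape easy to unit-test without hitting the API.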

💬 [CHAT_INTEGRATION]

Integrate moderation before saving or broadcasting each message. The safest pattern is: validate → moderate → block/allow → store.
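
The validate → moderate → block/allow → store pattern can be sketched as below. `moderate`, `store`, and `broadcast` are injected callables standing in for your API client and persistence layer; all three names are hypothetical:

```python
def handle_incoming(message: dict, moderate, store, broadcast) -> dict:
    """validate -> moderate -> block/allow -> store.

    `moderate(text, sender_id)` returns a parsed /v1/check verdict;
    `store` and `broadcast` persist and fan out an allowed message.
    """
    text = (message.get("text") or "").strip()
    if not text:
        return {"delivered": False, "reason": "empty message"}

    verdict = moderate(text, message["senderId"])
    if not verdict["isSafe"]:
        # keep the reportId for moderation analytics
        return {"delivered": False,
                "reason": verdict.get("reason"),
                "reportId": verdict.get("reportId")}

    store(message)      # persist only content that passed moderation
    broadcast(message)  # then fan out to the room
    return {"delivered": True}
```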


🗨️ [COMMENTS_INTEGRATION]

Use the same endpoint to scan comments before publish. Keep `senderId` stable per account so risk history remains accurate.
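
A pre-publish gate might look like this sketch. `moderate` and `publish` are hypothetical callables; the key point is passing the user's stable account ID, not a per-session or per-device ID, so violation history accumulates correctly:

```python
def publish_comment(comment: dict, account_id: str, moderate, publish) -> bool:
    """Scan a comment before it goes live.

    `account_id` is the stable identifier sent as metadata.senderId;
    `moderate(text, sender_id)` returns a parsed /v1/check verdict.
    """
    verdict = moderate(comment["text"], account_id)
    if not verdict["isSafe"]:
        return False  # hold for review instead of publishing
    publish(comment)
    return True
```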


📊 [06_USER_RISK_SCORING]

Every sender has a persistent risk profile scoped to your developer account. Each unsafe message increments the sender's violation count, and the score is recalculated.

riskScore = min(violationCount × 0.2, 1.0)

1 violation  -> 0.2
2 violations -> 0.4
3 violations -> 0.6
4 violations -> 0.8
5+           -> 1.0
0.0 – 0.3   Low Risk        1 violation
0.4 – 0.7   Medium Risk     2–3 violations
0.8 – 1.0   Critical Risk   4+ violations

Important: use a consistent `metadata.senderId` for each user. Changing sender IDs resets effective history for that user.
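
The formula and tiers above translate directly into code (the function names are illustrative):

```python
def risk_score(violation_count: int) -> float:
    """riskScore = min(violationCount * 0.2, 1.0), capped at 1.0."""
    return min(violation_count * 0.2, 1.0)

def risk_tier(score: float) -> str:
    """Map a score onto the documented Low / Medium / Critical bands."""
    if score >= 0.8:
        return "critical"
    if score >= 0.4:
        return "medium"
    return "low"
```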

⚠️ [07_ERROR_HANDLING]

Implement robust error handling so your application keeps working when the moderation service is slow or unavailable.

If the upstream moderation service times out, the API fails open and returns:

{
  "isSafe": true,
  "error": "Upstream timeout"
}
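
You can mirror that fail-open behavior on the client side for network errors and client-side timeouts that never reach the API. A sketch; `check` stands in for your API call, and the `fail_open` flag is a client-side policy choice, not an API parameter:

```python
def safe_verdict(check, text: str, sender_id: str,
                 fail_open: bool = True) -> dict:
    """Call `check(text, sender_id)` and fall back on any error.

    With fail_open=True, failures allow the message through, matching
    the API's own timeout behavior; consider fail_open=False for
    stricter surfaces such as kids mode.
    """
    try:
        return check(text, sender_id)
    except Exception as exc:  # network error, client-side timeout, ...
        return {"isSafe": fail_open, "error": str(exc)}
```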

✅ [08_BEST_PRACTICES]

Always Include senderId

The metadata.senderId field is REQUIRED for user tracking and risk scoring.

Never Expose API Keys

Always make moderation requests from your backend server.

Store reportId

When content is flagged, save the reportId for tracking and analytics.

⚡ [PERFORMANCE_TARGETS]

Response Time (p95)   <300ms
Throughput            1000+ req/s
Uptime                99.9%