Integrate content moderation in under 5 minutes
All API requests require authentication using Bearer tokens in the Authorization header.
Authorization: Bearer YOUR_API_KEY
⚠️ Security: Never expose your API key in client-side code. Always make requests from your backend server.
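A minimal server-side sketch of attaching the Bearer token; the endpoint URL and environment-variable name below are placeholders, not part of the API:

```python
import os

# Placeholder endpoint URL; substitute the real moderation endpoint.
API_URL = "https://api.example.com/v1/moderate"

def build_headers() -> dict:
    # Read the key from the server environment so it never ships to clients.
    api_key = os.environ.get("MODERATION_API_KEY", "YOUR_API_KEY")
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

print(build_headers()["Authorization"].startswith("Bearer "))  # → True
```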
Moderate content and receive instant safety analysis with user risk scoring.
Request body:

{
"text": "string (REQUIRED)",
"metadata": {
"senderId": "string (REQUIRED)",
"platform": "string (optional)",
"chatRoomId": "string (optional)",
"userId": "string (optional)"
},
"mode": "string (optional)"
}

Response:

{
"isSafe": true | false,
"reason": "No policy violation",
"reportId": "uuid-1234..." | null,
"userRiskScore": 0.0 to 1.0
}

Get your current usage statistics and remaining credits.
{
"remainingCredits": 896,
"usedThisMonth": 104,
"limit": 1000
}

Choose the moderation mode that best fits your use case.
mode: "community"
Standard toxicity, hate speech, and general harassment detection.

mode: "dating"
Allows flirting but blocks harassment and threats. Optimized for dating apps.

mode: "kids"
Zero tolerance for profanity, violence, and mature themes.

mode: "marketplace"
Scam detection, fake payment links, and suspicious requests.

All responses from the API follow a consistent structure. Below are the possible response shapes.
Safe content:

{
"isSafe": true,
"reason": null,
"reportId": null,
"userRiskScore": 0.0
}

Flagged content:

{
"isSafe": false,
"reason": "profanity detected",
"reportId": "uuid-1234-5678-abcd",
"userRiskScore": 0.65
}

Risk score escalates per sender with repeated unsafe content: 0.2 → 0.4 → 0.6 → 0.8 → 1.0.
Service timeout (fail-open):

{
"isSafe": true,
"error": "Upstream timeout"
}

Integrate moderation before saving or broadcasting each message. The safest pattern is: validate → moderate → block/allow → store.
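That pipeline can be sketched as follows. `moderate` is stubbed here; in production it would POST the request body shown above (with `text` and `metadata.senderId`) to the moderation endpoint from your backend, and the length limit in `validate` is an assumption, not a documented API limit:

```python
def validate(text: str) -> bool:
    # Cheap pre-checks before spending a moderation credit.
    return bool(text and text.strip()) and len(text) <= 5000

def moderate(text: str, sender_id: str) -> dict:
    # Stub for the real backend call, which would POST
    # {"text": text, "metadata": {"senderId": sender_id}} and return the JSON body.
    return {"isSafe": True, "reason": None, "reportId": None, "userRiskScore": 0.0}

def handle_message(text: str, sender_id: str, store) -> bool:
    # validate -> moderate -> block/allow -> store
    if not validate(text):
        return False                 # reject malformed input outright
    result = moderate(text, sender_id)
    if not result["isSafe"]:
        return False                 # block; result["reportId"] can be logged here
    store(text)                      # allow: persist, then broadcast
    return True

saved = []
handle_message("hello!", "user-42", saved.append)
print(saved)  # → ['hello!']
```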
Every sender has a persistent risk profile per developer account. On each unsafe message, violation count increments and score is recalculated.
riskScore = min(violationCount × 0.2, 1.0)

1 violation → 0.2
2 violations → 0.4
3 violations → 0.6
4 violations → 0.8
5+ violations → 1.0
Important: use a consistent `metadata.senderId` for each user. Changing sender IDs resets effective history for that user.
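A per-sender tracker following the formula above might look like this sketch; the in-memory dict is a stand-in for whatever storage your backend actually uses:

```python
def risk_score(violation_count: int) -> float:
    # riskScore = min(violationCount * 0.2, 1.0); rounded to avoid float artifacts.
    return round(min(violation_count * 0.2, 1.0), 1)

violations = {}  # keyed by a stable senderId; changing IDs resets this history

def record_violation(sender_id: str) -> float:
    violations[sender_id] = violations.get(sender_id, 0) + 1
    return risk_score(violations[sender_id])

print([record_violation("user-42") for _ in range(6)])
# → [0.2, 0.4, 0.6, 0.8, 1.0, 1.0]
```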
Implement robust error handling to ensure your application continues functioning.
If the moderation service times out, the API returns:
{
"isSafe": true,
"error": "Upstream timeout"
}

The metadata.senderId field is REQUIRED for user tracking and risk scoring.
Always make moderation requests from your backend server.
When content is flagged, save the reportId for tracking and analytics.
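One way to apply these rules, treating the timeout response as fail-open and saving the reportId; the list here is a hypothetical stand-in for your analytics store:

```python
flagged_reports = []  # stand-in for wherever reportIds are persisted

def should_deliver(response: dict) -> bool:
    # Timeout fallback: the API answers isSafe: true plus an "error" field,
    # so the message is delivered rather than dropped (fail-open).
    if "error" in response:
        return response.get("isSafe", True)
    if not response["isSafe"]:
        if response.get("reportId"):
            flagged_reports.append(response["reportId"])  # save for analytics
        return False
    return True

print(should_deliver({"isSafe": True, "error": "Upstream timeout"}))  # → True
print(should_deliver({"isSafe": False, "reason": "profanity detected",
                      "reportId": "uuid-1234-5678-abcd", "userRiskScore": 0.65}))  # → False
```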
Comments integration
Use the same endpoint to scan comments before publish. Keep `senderId` stable per account so risk history remains accurate.
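A pre-publish hook for comments can follow the same shape; `moderate_stub` below is a hypothetical stand-in for the real API call:

```python
def moderate_stub(text: str, sender_id: str) -> dict:
    # Hypothetical stand-in: the real call POSTs to the moderation endpoint.
    return {"isSafe": "scam" not in text.lower()}

def can_publish_comment(text: str, author_id: str) -> bool:
    # senderId is the author's stable account ID, so risk history carries over.
    return moderate_stub(text, author_id)["isSafe"]

print(can_publish_comment("Nice post!", "acct-7"))  # → True
```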