Profanity Detection API
REST API to detect profanity in text, audio, and video. A morphological analyzer catches swear words across all word forms, grammatical cases, and intentional obfuscation, via a single HTTP request.
Morphological analysis
A dictionary of 500+ roots plus lemmatization. Catches 'fuck', 'fucking', 'motherfuckers' in every form, conjugation, and prefixed variant.
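To illustrate the idea (not the API's actual implementation), root matching after lemmatization can be sketched like this; the root set and suffix rules below are tiny stand-ins for the real 500+ root dictionary and morphological analyzer:

```python
# Illustrative sketch: lemmatize, then look for a known root inside
# the lemma so prefixed forms like "motherfuckers" are still caught.
PROFANE_ROOTS = {"fuck"}
SUFFIXES = ["ers", "ing", "er", "ed", "s"]  # crude stemmer stand-in

def lemma(word: str) -> str:
    w = word.lower()
    for suf in SUFFIXES:
        if w.endswith(suf) and len(w) - len(suf) >= 3:
            return w[: -len(suf)]
    return w

def is_profane(word: str) -> bool:
    l = lemma(word)
    return any(root in l for root in PROFANE_ROOTS)

for w in ["fuck", "fucking", "motherfuckers", "duck"]:
    print(w, is_profane(w))
# fuck True / fucking True / motherfuckers True / duck False
```

A real morphological analyzer uses full lemmatization tables rather than suffix stripping, but the root-containment step is the same shape.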
Obfuscation-resistant
Detects intentional distortions: 'f*ck', 'fuuuck', 'f_ck', leet-speak. Custom dictionaries for your specific cases.
Russian + English
Dedicated models for both languages, selected via the language=ru|en parameter. /analyze/media auto-detects the language via Deepgram.
Text, audio and video
Text: POST /analyze/text, up to 50,000 characters. Audio/video: POST /analyze/media or /analyze/media-url; upload a file or pass a YouTube/VK/RuTube link.
Word positions in response
The response contains a words array with text, isProfane, startMs, and endMs fields. Perfect for UI highlighting, subtitles, and manual moderation.
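Consuming that array is straightforward. The field names below (text, isProfane, startMs, endMs) come from the description above; the sample values themselves are made up for illustration:

```python
# Hypothetical sample of the "words" array; values are placeholders.
words = [
    {"text": "check", "isProfane": False, "startMs": 0, "endMs": 310},
    {"text": "this", "isProfane": False, "startMs": 310, "endMs": 520},
    {"text": "fucking", "isProfane": True, "startMs": 520, "endMs": 980},
    {"text": "bug", "isProfane": False, "startMs": 980, "endMs": 1200},
]

def highlight(words, mark="**"):
    # Wrap flagged words in a marker for UI highlighting or subtitles.
    return " ".join(
        f"{mark}{w['text']}{mark}" if w["isProfane"] else w["text"]
        for w in words
    )

def profane_spans_ms(words):
    # (startMs, endMs) pairs, e.g. for muting or bleeping segments.
    return [(w["startMs"], w["endMs"]) for w in words if w["isProfane"]]

print(highlight(words))         # check this **fucking** bug
print(profane_spans_ms(words))  # [(520, 980)]
```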
High accuracy
Standard mode: 97%+ precision. Precise mode: 99%+ by running two AI providers in parallel. Enhanced mode: built for songs; separates vocals via Demucs before analysis.
Code examples

Python:

import os
from videocensor import VideoCensor

client = VideoCensor(api_key=os.environ["VIDEOCENSOR_API_KEY"])
result = client.analyze_text("check this text for profanity")
# result: AnalyzeTextResult(flagged_count=..., categories=...)

TypeScript:

import { VideoCensor } from "@videocensor/sdk";

const client = new VideoCensor({ apiKey: process.env.VIDEOCENSOR_API_KEY! });
const result = await client.analyzeText("check this text for profanity");
// result: { flaggedCount: ..., categories: [...] }

cURL:

curl -X POST https://videocensor.ru/api/v1/analyze/text \
  -H "X-API-Key: $VIDEOCENSOR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text":"test","language":"en"}'

Python (audio/video):

import os
from videocensor import VideoCensor

client = VideoCensor(api_key=os.environ["VIDEOCENSOR_API_KEY"])
result = client.analyze_media("/path/to/video.mp4", language="en")
# result: AnalyzeTextResult(flagged_count=..., categories=...)

Where it's used
UGC moderation
Check posts, comments, usernames before publication.
YouTube creators
Pre-upload check — prevent demonetization from profanity.
Podcast production
Auto-check episodes before publishing, generate profanity timestamps.
Enterprise recordings
Compliance moderation for call recordings, webinars, and internal meetings.
EdTech platforms
UGC filtering for kids and educational apps.
Content studios
Scripts, voiceover, pre-release QA.
Profanity Detection — FAQ
How is this different from bad-words npm packages?
Packages like bad-words or profanity-filter are static word lists: they miss inflections, word forms, and obfuscation. Our API combines morphological lemmatization, a 500+ root dictionary, and AI models, and works on text, audio, and video.
Which languages are supported?
Russian and English are fully supported; others are on the roadmap. Select with the language=ru|en parameter; media endpoints auto-detect the language.
How are obfuscated words handled?
Pre-normalization runs first: digit-to-letter mapping (1→i, 3→e, 0→o), repeated-character stripping, and removal of in-word punctuation. The normalized word then goes through morphological parsing and root lookup.
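The normalization steps listed above can be sketched in a few lines. This is an illustrative stand-in, not the API's actual code, and it only covers the mappings named here (1→i, 3→e, 0→o); missing letters, as in 'f_ck', are left for the later fuzzy root lookup to resolve:

```python
import re

LEET = str.maketrans({"1": "i", "3": "e", "0": "o"})

def normalize(word: str) -> str:
    w = word.lower().translate(LEET)         # digit-to-letter mapping
    w = re.sub(r"[^a-zа-яё]", "", w)         # drop in-word punctuation: f*ck, f_ck
    w = re.sub(r"(.)\1{2,}", r"\1", w)       # collapse 3+ repeats: fuuuck -> fuck
    return w

print(normalize("fuuuck"))  # fuck
print(normalize("l33t"))    # leet
print(normalize("f_ck"))    # fck  (missing letter handled downstream)
```

The repeat-collapsing rule only fires on runs of three or more so that legitimate double letters survive normalization.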
Can I add custom words?
Yes. The customDictionary parameter accepts an array of your words; they're lemmatized and checked alongside the base dictionary. Use the whitelist parameter for exclusions.
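A request body combining both parameters might look like this. The customDictionary and whitelist names come from the answer above and the endpoint from the curl example; the words themselves are placeholder examples:

```python
import json

# Body for POST /analyze/text with custom terms and exclusions.
payload = {
    "text": "our darn brand-specific slang",
    "language": "en",
    "customDictionary": ["darn"],   # extra words, lemmatized server-side
    "whitelist": ["scunthorpe"],    # never flag these, even on root match
}
body = json.dumps(payload)
print(body)
```

Send this body with any HTTP client, with the X-API-Key header set as in the curl example.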
Are there false positives?
Rarely. Add specific words to the whitelist parameter, or lower the preset to mild for reduced sensitivity.
What are the response times?
Text: 50-200 ms. Audio/video depends on duration: one minute takes ≈8 s in standard mode, ≈15 s in precise, and ≈60 s in enhanced (Demucs vocal separation). Use webhooks for long files.
Detect profanity via API in 30 seconds
The Free plan includes 100 credits per month. pip install videocensor, import, call: done.