VideoCensor

Remove profanity from video — fast and automatic


N. A. Dzhumaev, TIN 645504695070, self-employed (NPD) · © 2026 VideoCensor. All rights reserved.

Profanity Detection API

REST API to detect profanity in text, audio, and video. A morphological analyzer catches swear words in all word forms and cases, and sees through intentional obfuscation, all via a single HTTP request.

Try free · Open docs

Morphological analysis

500+ roots + lemmatization. Catches 'fuck', 'fucking', 'motherfuckers' — all forms, all conjugations, all prefixes.

Obfuscation-resistant

Detects intentional distortions: 'f*ck', 'fuuuck', 'f_ck', leet-speak. Custom dictionaries for your specific cases.

Russian + English

Models for both languages, language=ru|en parameter. Auto-detect on /analyze/media via Deepgram.

Text, audio and video

Text — POST /analyze/text up to 50,000 chars. Audio/video — POST /analyze/media, /analyze/media-url, upload file or YouTube/VK/RuTube link.

Word positions in response

Response contains a words array with text, isProfane, startMs, endMs. Perfect for UI highlighting, subtitles, manual moderation.
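As a sketch of how that array can drive UI highlighting (the field names follow the description above; the sample values are illustrative):

```python
def highlight(words):
    """Wrap profane tokens in <mark> tags for UI display."""
    return " ".join(
        f"<mark>{w['text']}</mark>" if w["isProfane"] else w["text"]
        for w in words
    )

# Illustrative words array in the documented shape
sample = [
    {"text": "what", "isProfane": False, "startMs": 0, "endMs": 220},
    {"text": "the", "isProfane": False, "startMs": 220, "endMs": 340},
    {"text": "fuck", "isProfane": True, "startMs": 340, "endMs": 690},
]
print(highlight(sample))  # what the <mark>fuck</mark>
```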

High accuracy

Standard mode — 97%+ precision. Precise mode — 99%+ via two AI providers in parallel. Enhanced mode — for songs, separates vocals via Demucs.

Code examples

Python — text
import os
from videocensor import VideoCensor

client = VideoCensor(api_key=os.environ["VIDEOCENSOR_API_KEY"])
result = client.analyze_text("check this text for profanity")
# result: AnalyzeTextResult(flagged_count=..., categories=...)
Node.js — text
import { VideoCensor } from "@videocensor/sdk";

const client = new VideoCensor({ apiKey: process.env.VIDEOCENSOR_API_KEY });
const result = await client.analyzeText("check this text for profanity");
// result: { flaggedCount: ..., categories: [...] }
cURL — text
curl -X POST https://videocensor.ru/api/v1/analyze/text \
  -H "X-API-Key: $VIDEOCENSOR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text":"test","language":"en"}'
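Consuming the JSON response in Python might look like this. The body below is illustrative, built from the fields documented above (flaggedCount and the words array); check the docs for the exact schema:

```python
import json

# Illustrative response body; the exact envelope may differ from this sketch.
raw = """{
  "flaggedCount": 1,
  "words": [
    {"text": "check", "isProfane": false, "startMs": null, "endMs": null},
    {"text": "fuck", "isProfane": true, "startMs": null, "endMs": null}
  ]
}"""

data = json.loads(raw)
flagged = [w["text"] for w in data["words"] if w["isProfane"]]
print(flagged)  # ['fuck']
```

For text analysis the timestamp fields are naturally empty; they are populated for audio/video.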
Python — video
import os
from videocensor import VideoCensor

client = VideoCensor(api_key=os.environ["VIDEOCENSOR_API_KEY"])
result = client.analyze_media("/path/to/video.mp4", language="en")
# result: media analysis result with flagged_count, categories and word timestamps
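One way to use the per-word timestamps from a media result: merge flagged spans (with a little padding) into mute intervals and emit an FFmpeg volume filter. The words shape follows the response description above; the helper names are our own sketch:

```python
def mute_intervals(words, pad_ms=50):
    """Merge flagged word spans (padded) into non-overlapping mute intervals."""
    spans = sorted(
        (w["startMs"] - pad_ms, w["endMs"] + pad_ms)
        for w in words if w["isProfane"]
    )
    merged = []
    for start, end in spans:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def ffmpeg_mute_filter(spans_ms):
    """Build an -af expression that silences each interval."""
    return ",".join(
        f"volume=enable='between(t,{s / 1000:.2f},{e / 1000:.2f})':volume=0"
        for s, e in spans_ms
    )

words = [
    {"text": "fuck", "isProfane": True, "startMs": 1000, "endMs": 1400},
    {"text": "fucking", "isProfane": True, "startMs": 1380, "endMs": 1700},
    {"text": "fine", "isProfane": False, "startMs": 3000, "endMs": 3200},
]
print(mute_intervals(words))  # [(950, 1750)]
```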

Where it's used

  • UGC moderation

    Check posts, comments, usernames before publication.

  • YouTube creators

    Pre-upload check — prevent demonetization from profanity.

  • Podcast production

    Auto-check episodes before publishing, generate profanity timestamps.

  • Enterprise recordings

    Call recordings, webinars, internal meetings compliance moderation.

  • EdTech platforms

    UGC filtering for kids and educational apps.

  • Content studios

    Scripts, voiceover, pre-release QA.

Profanity Detection — FAQ

How is this different from bad-words npm packages?

Packages like bad-words or profanity-filter are static word lists — they miss inflections, word forms, obfuscation. Our API uses morphological lemmatization + 500+ roots + AI models, works on text, audio and video.
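A toy illustration of the difference. This is our own simplified sketch, not the service's actual morphology engine, which uses real lemmatization over 500+ roots:

```python
ROOTS = {"fuck"}  # the real service ships hundreds of roots per language
PREFIXES = ("mother", "un")
SUFFIXES = ("ers", "er", "ing", "ed", "s")

def reduce_to_root(word: str) -> str:
    """Naive prefix/suffix stripping standing in for real lemmatization."""
    w = word.lower()
    for p in PREFIXES:
        if w.startswith(p):
            w = w[len(p):]
            break
    for s in SUFFIXES:
        if w.endswith(s):
            w = w[:-len(s)]
            break
    return w

def is_profane(word: str) -> bool:
    return reduce_to_root(word) in ROOTS

# A static list containing only "fuck" misses these forms; root reduction does not:
print([is_profane(w) for w in ("fucking", "motherfuckers", "polite")])
# [True, True, False]
```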

Which languages are supported?

Russian and English are fully supported; other languages are on the roadmap. Use the language=ru|en parameter, with auto-detect for media.

How are obfuscated words handled?

Pre-normalization: digit-to-letter mapping (1→i, 3→e, 0→o), repeated character stripping, in-word punctuation removal. Then morphological parsing and root lookup.
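Those steps can be sketched like this. It is a simplified illustration: the production pipeline feeds the normalized token into morphological parsing, and treating in-word punctuation as a wildcard is just one possible approach:

```python
import re

LEET = str.maketrans({"1": "i", "3": "e", "0": "o", "4": "a", "@": "a", "$": "s"})

def normalize(token: str) -> str:
    t = token.lower().translate(LEET)   # digit-to-letter mapping
    t = re.sub(r"(.)\1{2,}", r"\1", t)  # strip runs of repeated characters
    t = re.sub(r"[^a-z]", ".", t)       # in-word punctuation becomes a wildcard
    return t

def matches_root(token: str, roots: set[str]) -> bool:
    """Match the normalized token (with wildcards) against the root list."""
    pattern = normalize(token)
    return any(re.fullmatch(pattern, root) for root in roots)

roots = {"fuck"}
print([matches_root(t, roots) for t in ("f*ck", "fuuuck", "f_ck", "duck")])
# [True, True, True, False]
```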

Can I add custom words?

Yes. The customDictionary parameter accepts an array of your words — they're lemmatized and checked alongside the base dictionary. Use the whitelist parameter for exclusions.
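An illustrative request body combining both parameters. customDictionary and whitelist are documented above; the surrounding fields mirror the text-endpoint example, and the word values are only examples:

```json
{
  "text": "that darn frigging heck",
  "language": "en",
  "customDictionary": ["darn", "frigging"],
  "whitelist": ["heck"]
}
```

Send it as the POST body to /analyze/text with the same X-API-Key header as in the cURL example.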

Are there false positives?

Rarely. Use the whitelist parameter for specific words or lower the preset to mild for reduced sensitivity.

What are the response times?

Text — 50-200 ms. Audio/video — depends on duration: 1 minute in standard ≈ 8s processing, precise ≈ 15s, enhanced ≈ 60s (Demucs). Use webhooks for long files.

Detect profanity via API in 30 seconds

100 credits/month on Free. pip install videocensor, import, call — done.

Try free · Open docs