Content Policy & Safety Measures
This document transparently describes the controls Cormsor AI applies for user safety, content quality, and regulatory compliance.
1. Service scope
Cormsor AI provides a general-purpose AI assistant service to English-speaking users worldwide, with an initial focus on anglophone Africa (Nigeria, Kenya, South Africa, Ghana, and beyond):
- Writing, summarisation, translation, ideation
- Code generation, debugging, technical explanation
- Web search and synthesis of fresh information
- Image generation
- Music generation
- Short video generation
2. Helpfulness principle
Cormsor is a general-purpose assistant. The user's right to accurate and useful information comes first; unnecessary refusals push users toward worse sources of information.
- It answers general-knowledge questions — health, law, finance, science, history, technology, and more.
- It refers users to professionals — for personal diagnoses, predictions of case outcomes, or specific investment recommendations, Cormsor explains the general situation and directs the user to a qualified expert for any concrete personal decision.
- It shares life-saving emergency information — infant safety, first aid, suicide-crisis situations: International Association for Suicide Prevention (iasp.info/resources/Crisis_Centres); Nigeria: 0806 210 6493; Kenya: 1199; South Africa: 0800 567 567; UK: 116 123; US: 988.
3. Refused content (closed list)
Only requests whose fulfilment would cause concrete harm are refused:
| Category | Example |
|---|---|
| 🚫 Weapons / explosives / drug manufacture instructions | "How do I build a bomb", "meth synthesis route" |
| 👶 Child sexual abuse material | Any sexual content involving minors — automatic refusal + log |
| 🪪 Impersonation / defamation | Cloning a specific real person's voice or face; fabricated quotes / defamatory content |
| 💀 Suicide-method guides | Instructions for a specific suicide method — replaced with empathy + crisis-line referrals |
| 🕵️ Fraud / attack tooling | Card-skimming code, phishing kits, ransomware |
3.1 Technical implementation
Two-layer architecture:
- Pre-filter: keyword-based instant refusal for the specific harmful categories above (the LLM is not invoked).
- Assistant instructions: the model's system prompt encodes Anthropic-style helpful + harmless principles; nuanced questions are evaluated in context by the assistant itself.
3.2 Image / music / video generation safety
- Image: NSFW content, extreme violence, and impersonation of real celebrities are blocked at generation time.
- Music: copyright-infringement and harmful-content filters are applied; vocals that imitate the voice of a specific existing artist are not supported.
- Video: requests for deepfake-style content resembling real individuals are refused as a matter of policy.
4. Layered content safety (synthetic-by-design architecture)
Cormsor AI is designed around the principle of preventing user harm at the source. The service structurally does not accept the inputs needed to create person-specific content. This architectural decision reduces deepfake and impersonation risk to near-zero through technical infrastructure rather than contractual promises.
4.1 Inputs that are not accepted
- No face-photo upload: the user cannot upload a real person's face and turn it into video or images. In every video pipeline, the "first frame" is exclusively a synthetic character generated by AI.
- No voice cloning: any audio file the user uploads is used only for automatic speech recognition (transcription). The user's own voice never appears in any generated output.
- No video-to-video conversion: there are no "video stylize" / "face swap" flows that accept an existing real video as input.
- No celebrity impersonation via text prompts: even if a real person's name appears in a text prompt, the underlying AI providers' celebrity-protection filters return a synthetic character; outputs that imitate the requested person are not produced.
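The acceptance rules above amount to an input allowlist. A minimal sketch, in which the kind names are assumptions invented for illustration:

```python
# Illustrative enforcement of Section 4.1: only synthetic-generation inputs
# are accepted. These kind names are hypothetical, not Cormsor's API.
ALLOWED_INPUTS = {
    "text_prompt",    # prompts for image / music / video generation
    "audio_for_asr",  # used only for transcription, never for voice output
}

REJECTED_INPUTS = {
    "face_photo",     # no real-person face upload
    "voice_sample",   # no voice cloning
    "source_video",   # no video-to-video / face-swap flows
}

def validate_input(kind: str) -> bool:
    """Return True only for input kinds the synthetic-by-design pipeline accepts."""
    if kind in REJECTED_INPUTS:
        return False
    return kind in ALLOWED_INPUTS
```

Unknown kinds are rejected by default, which is what makes the guarantee architectural: a new risky input type would have to be explicitly allowlisted before it could reach any generation pipeline.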
4.2 Output marking
- All generated content is marked with cormsor.ai (watermark / metadata); content is transparently identified as AI-generated.
- If users share generated content on social media, the share page makes it clear to viewers that the content was AI-generated.
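One way to realise the marking described above is a provenance record attached to every output. This is a sketch under assumptions — the function name is invented, and the record is shown as a JSON sidecar for illustration, whereas in practice the mark may be an embedded watermark or file metadata:

```python
import hashlib
import json

def mark_output(content: bytes, generator: str = "cormsor.ai") -> str:
    """Attach AI-provenance metadata to a generated artifact (hypothetical helper)."""
    record = {
        "generator": generator,
        "ai_generated": True,  # content is transparently identified as AI-made
        # Hash binds the mark to this exact output, so the label survives copying.
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)
```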
4.3 Provider-level extra protections
Cormsor AI relies on the following additional safety layers during generation:
- The infrastructure providers used for image and video generation apply their own safety filters for celebrities and violent / harmful content; refusals are propagated to Cormsor.
- For audio generation, a pre-defined general-purpose voice pool is used; there is no flow that lets a user clone their own voice into the system.
- Reference images used for character consistency are regenerated synthetically by the Cormsor pipeline each time; the user cannot upload an external source.
4.4 What this approach means
Cormsor AI is not a "user upload + transformation" environment but a "synthetic AI generation" environment. The user cannot impersonate a real individual's face or voice through the service — because the service does not accept those inputs. As a result, deepfake creation, identity theft, celebrity impersonation, and similar high-risk scenarios are blocked by design, with no need for an additional review mechanism.
5. User policy (reference to Terms)
Every user accepts the Terms of Service at sign-up. The Terms explicitly state:
- "Cormsor AI does not provide medical, legal, or financial advice."
- "Full responsibility for decisions made on the basis of AI responses lies with the user."
- "The service may not be used for unlawful or harmful purposes."
- In the event of a breach, the Service Provider may suspend access without notice.
6. Logging and reporting
Cormsor AI manages its service logs on the following principles:
- Chat and generation logs are retained for a defined period to comply with the GDPR (EU), NDPR (Nigeria), POPIA (South Africa), and the Kenyan Data Protection Act, 2019; users may hide or request deletion of their own conversations at any time.
- There is no continuous active monitoring. Logs are inspected only on signals such as user complaints, lawful requests, or abnormal spikes in automatic refusals.
- Accounts found to be in breach of policy may have their access blocked; decisions are taken after manual review.
7. Marketing and use limits
Cormsor AI product policy and marketing approach:
- Personalised medical, legal, and financial advice is not sold or marketed; the service does not provide it (such requests are automatically refused and redirected per Section 2).
- Adult content, drugs, weapons, and gambling are not offered in any form.
- No unsolicited promotion (spam / cold email) is used; growth is via organic channels and approved advertising platforms only.
- Bypass, proxy, or anti-blocking services are not provided.
- Payment flows are fully aligned with the Acceptable-Use Policies of our payment providers.
8. Complaints and reporting
If you suspect a violation: [email protected]. We respond within 48 hours. For urgent cases (suspected CSAM, etc.): use the same address with the subject line "URGENT".
9. Updates
The version date is updated whenever this policy changes. Material changes are communicated to users by email.