
Content Policy & Safety Measures

A transparent account of the controls Cormsor AI applies for user safety, content quality, and regulatory compliance.

1. Service scope

Cormsor AI provides a general-purpose AI assistant service to English-speaking users worldwide, with an initial focus on anglophone Africa (Nigeria, Kenya, South Africa, Ghana, and beyond).

⚠ Cormsor AI provides information; it does not replace your professional decisions. For personal medical, legal, or financial decisions, consult a qualified professional.

2. Helpfulness principle

Cormsor is a general-purpose assistant. The user's right to accurate and useful information comes first; unnecessary refusals push users toward worse sources of information.

3. Refused content (closed list)

Only requests that would produce concrete harm are refused:

| Category | Example |
| --- | --- |
| 🚫 Weapons / explosives / drug manufacture instructions | "How do I build a bomb", "meth synthesis route" |
| 👶 Child sexual abuse material | Any sexual content involving minors; automatic refusal + log |
| 🪪 Impersonation / defamation | Cloning a specific real person's voice or face; fabricated quotes / defamatory content |
| 💀 Suicide-method guides | Instructions for a specific suicide method; replaced with empathy + crisis-line referrals |
| 🕵️ Fraud / attack tooling | Card-skimming code, phishing kits, ransomware |

3.1 Technical implementation

Two-layer architecture:
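The details of the two layers are not included in this extract. Purely as an illustration, a two-layer refusal pipeline typically pairs a fast check on the incoming request with a second check on the generated answer; every name in the sketch below (topics, keywords, functions) is hypothetical, not Cormsor's actual implementation:

```python
# Hypothetical sketch of a two-layer refusal pipeline. The real classifier,
# topic list, and model call are not described in this document; all names
# here are illustrative.

REFUSED_TOPICS = {"weapons", "csam", "impersonation", "suicide_methods", "fraud_tooling"}

def classify(text: str) -> set[str]:
    """Stand-in for a trained policy classifier; returns matched topics."""
    keywords = {
        "weapons": ("build a bomb", "synthesis route"),
        "fraud_tooling": ("phishing kit", "card skimmer", "ransomware"),
    }
    return {topic for topic, kws in keywords.items()
            if any(kw in text.lower() for kw in kws)}

def generate(prompt: str) -> str:
    """Stand-in for the underlying model call."""
    return f"(model answer to: {prompt})"

def answer(prompt: str) -> str:
    # Layer 1: screen the incoming request before any generation happens.
    if classify(prompt) & REFUSED_TOPICS:
        return "REFUSED: this request falls under the closed list in section 3."
    draft = generate(prompt)
    # Layer 2: screen the draft output, catching harms layer 1 missed.
    if classify(draft) & REFUSED_TOPICS:
        return "REFUSED: the generated answer violated the content policy."
    return draft
```

Checking both the request and the draft means a harmful answer is blocked even when the request itself looked innocuous.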

3.3 Image / music / video generation safety

4. Layered content safety (synthetic-by-design architecture)

Cormsor AI is designed around the principle of preventing user harm at the source. The service structurally does not accept the inputs needed to create person-specific content. This architectural decision reduces deepfake and impersonation risk to near-zero through technical infrastructure rather than contractual promises.

4.1 Inputs that are not accepted

4.2 Output marking
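The marking scheme itself is not specified in this extract. As an illustration only, output marking commonly means attaching machine-readable provenance metadata (generator name, timestamp, content hash) to every generated artifact; the field names and function below are assumptions, not Cormsor's actual format:

```python
# Illustrative only: this document does not specify Cormsor's marking scheme.
# One common approach is a provenance record that travels with the file.
import hashlib
from datetime import datetime, timezone

def mark_output(artifact: bytes, generator: str = "cormsor-ai") -> dict:
    """Build a provenance record for a generated artifact (hypothetical schema)."""
    return {
        "ai_generated": True,                                  # explicit AI-origin flag
        "generator": generator,                                # which service produced it
        "created_at": datetime.now(timezone.utc).isoformat(),  # generation time (UTC)
        "sha256": hashlib.sha256(artifact).hexdigest(),        # ties record to exact bytes
    }

record = mark_output(b"\x89PNG...fake image bytes")
```

Because the record includes a hash of the exact bytes, any later tampering with the artifact breaks the link to its provenance record.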

4.3 Provider-level extra protections

Cormsor AI relies on the following additional safety layers during generation:

4.4 What this approach means

Cormsor AI is not a "user upload + transformation" environment but a "synthetic AI generation" environment. The user cannot impersonate a real individual's face or voice through the service — because the service does not accept those inputs. As a result, deepfake creation, identity theft, celebrity impersonation, and similar high-risk scenarios are blocked by design, with no need for an additional review mechanism.
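The "blocked by design" idea above can be sketched as a request schema with no field for user-supplied media, so face or voice inputs are rejected at the boundary before any generation occurs. The schema and field names below are hypothetical, assumed for illustration:

```python
# Hypothetical sketch of safety-by-input-design: the request schema simply
# has no field for user-supplied media, so person-specific inputs cannot
# reach the generator. All field names are illustrative.
from dataclasses import dataclass

ALLOWED_FIELDS = {"prompt", "style", "duration_seconds"}

@dataclass
class GenerationRequest:
    prompt: str
    style: str = "default"
    duration_seconds: int = 0

def parse_request(payload: dict) -> GenerationRequest:
    # Reject any payload carrying fields outside the text-only schema,
    # e.g. "reference_image" or "voice_sample".
    extra = set(payload) - ALLOWED_FIELDS
    if extra:
        raise ValueError(f"unsupported input(s): {sorted(extra)}")
    return GenerationRequest(**payload)
```

Rejection happens at parse time, so no review queue or post-hoc moderation of uploaded media is needed: the upload path simply does not exist.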

5. User policy (reference to Terms)

Every user accepts the Terms of Service at sign-up. The Terms explicitly state:

6. Logging and reporting

Cormsor AI manages its service logs on the following principles:

7. Marketing and use limits

Cormsor AI product policy and marketing approach:

8. Complaints and reporting

If you suspect a violation: [email protected]. We respond within 48 hours. For urgent cases (suspected CSAM, etc.): use the same address with the subject line "URGENT".

9. Updates

The version date is updated whenever this policy changes. Material changes are communicated to users by email.

Last updated: 2026-04-27 · Cormsor LLC, Wyoming USA