6ix — Community Guidelines
Last updated: 2025-10-28
These Guidelines keep 6ix safe, welcoming, and creative across instant chat, DMs, voice/video rooms, live streams, VOD, and feeds. They work together with our Terms of Use, Privacy Policy, Safety & Minors, Copyright/DMCA, Ads Policy, Cookies Policy, and Cybercrimes Notice.
These rules apply everywhere on 6ix: profiles, bios, public posts, comments, DMs, groups, live chat, voice/video rooms, and replays. Local laws may require stricter standards; where they do, those prevail.
1) Our values
- Safety first. We protect users—especially minors—against harm, harassment, and exploitation.
- Creativity & culture. 6ix celebrates Fashion, Music, Education, Gaming, and positive community energy.
- Integrity & fairness. Be honest, disclose paid promotions, respect IP and privacy, and compete fairly.
- Inclusion. Everyone deserves dignity. Bigotry and dehumanization are not welcome.
2) Scope & where rules apply
These Guidelines apply to public and private spaces on 6ix (including DMs and private rooms), to user profiles and handles, to uploaded/streamed content, to comments and reactions, and to off-platform behavior that creates significant safety risk on 6ix (e.g., coordinated harassment brigades organized elsewhere).
3) Respect & harassment
- No targeted harassment or bullying. This includes slurs, degrading stereotypes, dogpiling, or humiliating edits.
- No stalking or intimidation. Don’t pursue users across threads/DMs after they ask you to stop.
- No sexual harassment. Unwanted sexual advances, sexualized insults, or coercion are prohibited.
- Critique is fine; attacks are not. Debate ideas; don’t demean people.
If you experience harassment, use Reporting & Safety tools and consider blocking or muting.
4) Hate & extremism
- No hateful conduct. Dehumanizing or violent content targeting protected classes is banned.
- No extremist praise. Support, praise, or recruitment for terrorist or violent extremist groups is prohibited.
- Context matters. News and educational content may discuss sensitive topics with neutral framing, no advocacy, and appropriate warnings.
5) Violence, threats & dangerous acts
- No credible threats, doxxing threats, or incitement to violence.
- No instructions to create weapons, explosives, or to commit serious illegal acts.
- No dangerous challenges or acts likely to cause injury.
- Self-harm and suicide content must be supportive, non-graphic, and include crisis resources; content that does not meet this bar may be removed or restricted.
In emergencies, contact local services immediately. 6ix calling/video is not a substitute for emergency services.
6) Sexual content, consent & minors
- Zero tolerance for child sexual exploitation. We remove and report to authorities where required.
- Adult content rules. Sexual content may be age-gated/limited by region and law; non-consensual content is banned.
- Consent for recordings. Laws vary; if you record or enable VOD, disclose recording to participants.
See Safety & Minors and Terms of Use — Streaming.
7) Privacy, doxxing & personal data
- No posting others’ private info (home address, IDs, non-public phone/email) without explicit consent.
- Blur/redact sensitive details when sharing screenshots or video.
- Follow applicable recording consent laws; get permission where required.
- Stolen or hacked data, “revenge porn,” hidden cameras: strictly banned.
Read our Privacy Policy and Cybercrimes Notice.
8) Misinformation & harmful advice
- Don’t share content that could cause real-world harm (e.g., fake medical cures, dangerous “legal advice”).
- Label satire clearly. Provide sources for factual claims when asked.
- We may add context labels, reduce reach, or remove content that presents significant harm risk.
9) Spam, scams, fraud & platform abuse
- No phishing, crypto giveaway scams, multi-level marketing schemes, or deceptive monetization.
- No malware, links to exploits, or credential harvesting.
- No artificial engagement (engagement pods, fake likes/views, mass automation).
- No evasion of enforcement (ban hopping, duplicate accounts to harass).
10) Impersonation & authenticity
- No pretending to be someone else or an organization without clear parody/satire labeling.
- No deceptive verification/status claims.
- We may reclaim usernames that impersonate, infringe, or mislead.
11) Intellectual property & fair use
Use only content you have rights to. For takedowns/counter-notices, see Copyright/DMCA. Respect trademarks and publicity rights. “Fair use/dealing” varies by country—if unsure, get permission.
12) AI-generated & synthetic media
- Disclose synthetic media that could mislead (e.g., face/voice swaps).
- No deepfakes for harassment, sexualization, or political deception.
- No fabrications of crimes or statements that could cause harm.
- Follow the AI Features rules and applicable laws.
13) Live streams, calls & real-time rooms
- Set correct audience settings; add content warnings when needed.
- Use slow-mode, keyword filters, and moderators for larger rooms.
- Don’t stream illegal acts or enable harassment mobs.
- Disclose if you record or enable replays/VOD.
See the Livestream safety checklist.
14) Chats, DMs & group admins
- Group admins must enforce these rules; remove illegal or abusive content promptly.
- Admins who knowingly allow violations may face account actions (see Cybercrimes Notice).
- Use member approval, invite links, and report tools to keep spaces healthy.
15) Creators, earnings & disclosures
- Follow Creator Earnings for payouts/KYC/AML/taxes.
- Clearly disclose paid promotions and sponsorships (see Ads Policy).
- No harmful incentives (e.g., risky stunts for tips) or fraudulent fundraising.
16) Commercial content, ads & affiliates
- Label sponsored content; keep claims truthful and substantiated.
- Follow restricted/prohibited categories in the Ads Policy.
- Affiliate links must disclose the affiliation and any material connections.
17) Feature limits, age-gates & sensitive media labels
- We may limit features for accounts with policy or safety risks.
- Age-gates and sensitivity labels help protect minors and user choice.
- Some features are unavailable in certain regions due to law or risk.
18) Moderation tools & best practices
- Creators can appoint trusted mods; use word filters and timeouts.
- Prefer de-escalation where possible; apply clear channel rules.
- Document repeat problems to support reports and appeals.
19) Reporting, blocking & safety
- Use in-app report on posts, profiles, DMs, or streams; add details/screenshots.
- Block or mute users; hide replies; restrict DMs to followers or approved contacts.
- For urgent danger, contact local emergency services first.
See Report review workflow for what happens after you report.
20) Enforcement, strikes & appeals
- Actions we may take: warnings, label/reach limits, removals, feature limits, suspensions, termination.
- Strikes: Repeated/severe violations escalate; child safety and egregious harm trigger immediate action.
- Appeals: Where available, you can appeal a removal in-app. Not all actions are appealable (e.g., legal obligations).
Also read Terms — Enforcement and Cybercrimes Notice.
21) Regional expectations & legal notes
6ix is Nigeria-born with global reach. Local laws (e.g., NDPR, GDPR/UK GDPR, CCPA/CPRA, DPDP, LGPD, PIPEDA, AU Privacy) may add protections. Where local law is stricter, those rules prevail for users in that region.
See Privacy Policy and Terms — Regional.
22) Glossary & examples
Harassment: Targeted, repeated, or coordinated abuse. Example: Mass-tagging someone with slurs after they asked you to stop.
Doxxing: Publishing private info to threaten or shame. Example: Posting a home address and urging others to “visit.”
Extremist content: Advocacy or praise for violent extremist groups. Example: Recruiting or fundraising for a designated group.
Synthetic media: AI-made/edited media that could mislead. Example: A fake video of a person making statements they never made.
Dangerous acts: Activities with high risk of injury or harm. Example: Encouraging viewers to ingest non-food substances.
Appendix A — Enforcement matrix (illustrative)
| Category | First violation | Repeat | Severe/Egregious |
|---|---|---|---|
| Harassment | Label or removal; warning | Temp feature limits or suspension | Account suspension up to termination |
| Hate & Extremism | Removal; warning | Suspension; strike | Immediate suspension/termination |
| Child Safety | N/A | N/A | Immediate termination & report to authorities |
| Spam/Scams | Removal; warning | Feature limits; suspension | Termination (fraud/malware) |
| IP Infringement | Removal; notice | Repeat-infringer suspension | Termination after repeated DMCA strikes |
This table is illustrative. Actual actions depend on context, severity, history, and legal obligations.
Appendix B — Livestream safety checklist
- Set the correct audience (all, mature, subscribers) and content warnings.
- Enable slow-mode and word filters; appoint at least one moderator (more for larger rooms).
- Disclose if chat or the room is recorded or if VOD will be available.
- Keep crisis resources handy for mental health topics.
- Shut down streams if illegal or dangerous activity appears; report if needed.
Appendix C — Report review workflow (what happens after you report)
- Receipt: We receive your report and queue it by risk level.
- Review: We check context (history, links, repeat behavior, signals).
- Action: We apply the least severe effective action to restore safety.
- Notice: Where allowed, we notify parties of outcomes and appeals.
- Iteration: We adjust filters/rules to prevent recurrences.
23) Updates to these guidelines
We may update these Guidelines as 6ix evolves or laws change. For material changes, we’ll provide notice (e.g., in-app, email, or a banner). Continued use after the effective date means you accept the update.
24) Contact
Safety/abuse: safety@6ixapp.com
Legal: legal@6ixapp.com
IP: copyright@6ixapp.com
For privacy requests, see Privacy — Your Rights.