Legal
Acceptable use

- Document: Acceptable use policy
- Enforcement: Paid human moderators
- Appeals: Always, at least once
- Last updated: Forthcoming with public launch
What is not allowed
The categories below are prohibited. They are not suggestions; they are grounds for warning, removal, suspension, or ban, depending on severity and history.
Harassment
Repeated unwanted contact, targeted mockery, dogpiling. We take context seriously; a single sharp exchange is different from a pattern of aggression.
Targeted bullying
Focused, sustained attempts to humiliate or destabilize a specific person, including across multiple communities or accounts.
Hate speech
Attacks on people for race, ethnicity, nationality, religion, caste, sexual orientation, gender identity, disability, or serious disease.
Glorification of violence
Praising, celebrating, or encouraging violent acts against people, or threatening violence against anyone on or off the platform.
Sexual content involving minors
Strictly prohibited. Reported to the appropriate authorities. Zero tolerance. Zero.
Non-consensual intimate content
Sharing intimate imagery of a person without their permission, or threatening to share it. Strictly prohibited.
Impersonation
Pretending to be a specific person, organization, or official role in a way that misleads other users.
Doxxing and targeted disclosure
Publishing someone's private information to intimidate, harm, or encourage harassment against them.
Targeted discrimination in community formation
Building or curating spaces that exclude people on protected grounds in a discriminatory way.
Spam and content farming
Mass unsolicited messages, coordinated manipulation of visibility, or low-effort content designed to game community surfaces.
Illegal activity
Conduct that violates applicable law, including trafficking, fraud, or the sale of controlled items.
Exploiting minors
Grooming behavior, contacting minors outside appropriate spaces, or using the platform to build inappropriate access to young users.
How we enforce
Enforcement is done by paid human moderators. Where a case touches on mental health, safety risk, or community care, the Foundation's clinical advisors review the moderator's judgment before a lasting action is taken. We do not rely on automated systems alone for decisions that affect a person's account.
We use a tiered response ladder. The severity and history of the behavior determine where we start on the ladder.
Warning
Private note to the account, naming the behavior and the rule. No public marker, no permanent record visible to other users.
Content removal
Specific content is taken down. Author is notified, with the rule cited and the option to edit or appeal.
Feature restriction
Access to specific features (messaging, new community creation, direct connections) is paused while the account continues to use others.
Temporary suspension
Account is placed in read-only mode for a defined window (24 hours to 30 days). Account can appeal the suspension.
Permanent removal
Account is terminated. Data export is honored. Appeals reach an independent reviewer, not the same moderator who made the decision.
How to report
In the app, every piece of content has a report option. Use it. It is the fastest path to a human moderator and the way our internal systems log the case properly.
For serious or urgent safety concerns that you cannot report in-product, write to safety@elitesgen.org. A human reads every note.
If you or someone else is in immediate physical danger, contact local emergency services first. We are a platform, not a first responder.
What happens after you report
You will receive an acknowledgment within two business days. The case is reviewed by a moderator, routed to clinical review if applicable, and decided. You will be told the outcome: whether action was taken, and what kind. We do not share private details about the person you reported. If your report is declined and you disagree with the decision, you can request a second review.
Appeals
Every enforcement action can be appealed at least once. Appeals are filed through the link included in every enforcement notice, or at appeals@elitesgen.org.
Appeals of permanent removals are heard by an independent reviewer who did not make the original decision. The reviewer's finding is binding on the operations team, and the reasoning is included in the annual enforcement report in aggregated form.
We will not ban an account on first offense for ordinary disagreement, a clumsy joke, or a bad week. We reserve permanent removal for serious or repeated harm.
Our commitments to moderators
Moderation is difficult work. We do not externalize its costs to the people doing it.
Paid roles
Moderators are employees or fairly compensated contractors. We do not rely on unpaid volunteer moderation for enforcement.
Mental-health support
Paid access to therapy, confidential peer support, and clinical consultation for difficult cases.
Training
Onboarding curriculum covering the acceptable use policy, trauma-informed review, bias awareness, and escalation pathways. Refreshed yearly.
Break rotations
Scheduled rotations away from distressing queues. Caps on exposure to high-severity content in a given shift.
Accountability
Moderator decisions are reviewable by their leads. Patterns of error are corrected with training, not punished with quotas.
Paired with the terms of service
The terms of service govern your relationship with the platform. Acceptable use governs how people behave on it. Both are binding.