
Reporting & Moderation Policy

Purpose

To give every member a safe, respectful platform and a documented, accessible way to report abuse, harm, or violations of the community guidelines, and to ensure that urgent cases are addressed promptly and that all moderation actions are logged for legal compliance and review.

Last updated: Feb 2026


How members report content


Members can report content in any of these ways:

  • Use the platform “Report Content” option in the Community Hub, selecting the closest reason (e.g., harassment, misinformation, confidentiality, self-harm risk).

  • Message a Moderator or Admin directly if reporting feels safer, or if the content involves personal privacy.

  • Tag a moderator in the thread (only when appropriate) if the situation is escalating and you want visible support in the moment.


What to include in a report (if possible):

  • A link to or screenshot of the post (from within the platform)

  • What feels unsafe or concerning

  • Whether you want a public intervention or private handling


What moderators can do


To protect safety and community standards, moderators may take the following actions:


Content actions

  • Hide content temporarily while reviewing (used when context is unclear or to prevent pile-ons).

  • Remove content that breaks guidelines (e.g., harassment, confidentiality breaches, medical misinformation presented as fact, promotion/solicitation).

  • Request edits (ask the author to remove identifying information or graphic detail; if the content is not updated, moderators may remove it to protect the community).


Member actions

  • Gentle reminder (public or private) when behaviour needs a light redirect (e.g., “you should” advice-giving).

  • Formal warning for repeated guideline breaches.

  • Temporary mute/suspension when behaviour harms safety or disrupts the community.

  • Removal/ban in cases of serious or repeated harm (harassment, hate speech, doxxing, persistent boundary violations, or deliberate safety violations).

Moderators aim to use the least intrusive action that restores safety, but will act quickly where needed.


Typical response times


We do our best to respond as quickly as we can, but we’re a small team and cannot guarantee fixed timelines.

In general:

  • Urgent safety concerns (risk of harm, threats, self-harm content, doxxing) are prioritised and addressed as soon as they are seen.

  • Reports of harassment, confidentiality breaches, and misinformation are usually reviewed within 1–2 days.

  • General guideline issues (misplaced posts, tone reminders, minor disputes) may take a little longer depending on volume and moderator availability.

If something feels urgent, please report it and message a moderator.


Safety concerns & confidentiality limits


Where I’m At Community is a peer support and education space. We are not able to provide crisis counselling, therapy, or emergency response.


If someone appears at risk of harm

  • Moderators will respond with a safety message encouraging the person to contact emergency services or local crisis support.

  • We may hide or limit content that could be harmful to others (e.g., graphic detail, active threats).

  • We may privately message the member with crisis signposting (where platform features allow).

  • We document the incident internally to support consistent moderation.


Confidentiality in the community

  • Members are expected to keep what’s shared here private (no screenshots, no sharing stories outside the space).

  • However, confidentiality is not absolute. If we believe there is a serious and immediate safety risk, we may take protective action within the platform (e.g., escalation, suspension, removal of harmful content).

  • We do not share personal member information with employers/partners. Any partnership reporting is aggregated and anonymised.


Bottom line

We protect privacy and psychological safety as much as possible, and we prioritise immediate safety when risk is high.


What happens after you report (step-by-step)


  1. You submit a report
    You report a post/comment (or message a moderator). We receive an alert with a link to the content.

  2. A moderator reviews the content
    A moderator checks the post/comment and surrounding context (thread history, tone, possible safety risk).

  3. Immediate safety steps if needed
    If the content could cause harm (e.g., threats, self-harm risk, doxxing, graphic detail, harassment), we may hide it temporarily while we review and act.

  4. We decide the right action (based on guidelines)
    Depending on what’s needed, we may:

    • leave it as-is

    • add a gentle reminder/redirect

    • ask the author to edit (privacy/graphic detail)

    • remove the content

    • issue a warning or temporary mute/suspension

    • escalate to the admin/lead moderator for serious or repeated issues

  5. We may contact you (or the member) privately
    If it helps, we might message you to clarify what happened or to confirm you feel safe. We may also message the person who posted to explain the action and next steps.

  6. We document serious incidents
    For safety-related or repeated issues, we log what happened and what action was taken so moderation stays consistent and fair.

  7. You’re still supported
    If the report involves distressing content, you’re welcome to message the moderators. If it’s a high-risk safety issue, we’ll include crisis signposting and encourage professional support.


Important note: To protect privacy, we may not be able to share every detail of actions taken on another member’s account, but we will act to keep the community safe.
