
Automatically flag sensitive reports with AI content moderation

AI content moderation helps you manage sensitive or harmful content automatically, at scale, and in seconds.

Automatically flag harmful or sensitive reports

Our intelligent AI system automatically reviews all incoming Snaps, identifying and flagging potentially problematic content in both the image and the description. Our algorithm scans for:

  • Harmful or inappropriate content – harassment, hate speech, explicit material
  • Aggressive criticism of Solvers – targeting specific people or teams within your organisation

When content is flagged, it remains visible to you in the Solver Portal but is automatically hidden from the public Snap Feed in-app, protecting both your team and community members.

Flag visibility right where you need it

You'll see content flags in two key places within the Solver Portal:


When you view an individual Snap:

  • A dedicated "AI Moderation" section showing moderation status
  • Updated Public Reactions section clearly indicating when content is hidden from the Feed

On the report list view:

  • A new "Flagged by AI" column for immediate awareness
  • The option to show or hide the "Flagged by AI" column using custom view settings

Move through your day with greater certainty, making informed decisions with clear visibility into content that requires special handling. Much like content warnings in other media, flags let you apply appropriate discretion when viewing potentially sensitive material.

Contact us

Want a simple way to handle sensitive reports more confidently?

Get in touch to preview AI Content Moderation.
