Media Classification

Overview

Identifying and classifying sexual content in seized media is central to child exploitation investigations. Manual review is slow, traumatic for investigators, and inconsistent at scale.

Rigr AI's Media Classification capability uses deep-learning object detection to identify sexual content, body parts, activities, and contextual indicators in images — then maps those detections to a structured severity classification aligned with established review frameworks.

Detection Capabilities

The current model (VisualyzeV2) detects 99 distinct visual elements across the following categories:

Body Parts & Anatomy

Genitalia, breasts, buttocks, hands, feet — each classified by developmental stage (infant through adult).

Sexual Activities

Intercourse, oral sex, penetration, masturbation, posing, and non-penetrative contact — each detected and labelled as a distinct class.

Age Demographics

Faces and full-body figures classified by developmental stage: infant, toddler, prepubescent, pubescent, and adult.

Contextual Indicators

Selfies (phone/camera), screenshots, CSAM network logos, clothing, jewellery, restraints, and other evidentiary markers.

Severity Classification

Every image is assigned a frame-level severity classification based on the most serious content detected. The classification maps to a structured scale designed for investigative triage:

Severity   Classification                     Description
0          No sexual content                  Nothing of investigative interest detected
2          Exploitative / suggestive          Nudity, exposed anatomy, or sexualised context without explicit activity
3          Overt sexualised posing            Deliberate genital presentation or overtly sexualised positioning
4          Non-penetrative sexual activity    Masturbation, licking, or other non-penetrative sexual contact
5          Penetrative sexual activity        Intercourse, oral sex, anal or vaginal penetration

Each detection is also enriched with contextual flags — such as Self-Generated, Sadomasochism, or CG Elements — providing additional investigative context.
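The frame-level rule above — an image inherits the severity of its most serious detection — can be sketched in Python. The class-to-severity mapping and the 0.5 confidence threshold below are illustrative assumptions for demonstration, not the production taxonomy or tuning:

```python
# Illustrative sketch: compute frame-level severity as the maximum
# severity among per-object detections. The mapping and threshold are
# assumptions, not the full 99-class model taxonomy.
SEVERITY_BY_CLASS = {
    "No sexual content": 0,
    "Exploitative / suggestive": 2,
    "Overt sexualised posing": 3,
    "Non-penetrative sexual activity": 4,
    "Penetrative sexual activity": 5,
}

def frame_severity(detections, threshold=0.5):
    """Return the highest severity among detections scoring above threshold."""
    scores = [
        SEVERITY_BY_CLASS[d["ucs_sexual_content"]]
        for d in detections
        if d["score"] >= threshold and d["ucs_sexual_content"] in SEVERITY_BY_CLASS
    ]
    # An image with no confident sexual-content detections is severity 0.
    return max(scores, default=0)
```

The key point is that severity is a property of the whole frame, driven by the single most serious confident detection rather than an average.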

Try It

Upload an image to see media classification in action. Adult content is accepted for testing — do not upload CSAM. Uploads are processed securely and no images are stored.

Operational Use

Within investigative workflows, Media Classification is used to:

  • Triage large volumes of seized media by severity
  • Identify specific sexual activities, body parts, and contextual indicators
  • Flag self-generated content, CSAM network logos, and other evidentiary markers
  • Prioritise review queues so investigators focus on the most serious material first
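The review-queue prioritisation above can be sketched as a simple sort. The item shape (file name, frame severity, top detection score) mirrors fields from the classification response, but the structure and tie-breaking policy here are illustrative assumptions:

```python
# Illustrative triage sketch: order seized media for review by frame
# severity (most serious first), breaking ties by detection confidence.
# The item schema is an assumption for demonstration.
def triage(items):
    """Sort classified media items for review, most serious first.

    Each item is assumed to look like:
    {"file": str, "severity": int, "top_score": float}
    """
    return sorted(items, key=lambda i: (-i["severity"], -i["top_score"]))
```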

The capability is also available as a standalone API or containerised application that can be deployed in an air-gapped environment.

Deployment and Control

  • Fully containerised
  • On-premise and air-gapped operation
  • No data retention
  • Customer retains full control of inputs and outputs

For Developers

The classification API accepts images via multipart upload and returns a severity classification, contextual flags, and per-object detections with bounding boxes and confidence scores.

Quick start

POST /classify

curl -X POST https://api.mes.rigr.ai/classify \
  -H "X-API-KEY: $API_KEY" \
  -F "[email protected]" \
  -F "model=VisualyzeV2"
Response
{
  "classification": {
    "key": "rigr-penetrative",
    "display_name": "Penetrative sexual activity",
    "severity": 5
  },
  "flags": ["Self-Generated"],
  "detections": [{
    "class_name": "Male Receive Oral",
    "score": 0.87,
    "bbox": {"x": 0.12, "y": 0.45, "w": 0.31, "h": 0.62},
    "ucs_sexual_content": "Penetrative Sexual Activity"
  }]
}
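A client might consume the response above as follows. This is a minimal sketch that assumes the bbox values are normalised fractions of image width and height, as the example suggests; confirm the convention against the API reference before relying on it:

```python
# Minimal sketch of parsing a /classify response. Assumes bbox values
# are normalised {x, y, w, h} fractions of image dimensions.
def to_pixels(bbox, img_w, img_h):
    """Convert a normalised {x, y, w, h} bbox to integer pixel coordinates."""
    return {
        "x": round(bbox["x"] * img_w),
        "y": round(bbox["y"] * img_h),
        "w": round(bbox["w"] * img_w),
        "h": round(bbox["h"] * img_h),
    }

def summarise(response, img_w, img_h):
    """Pull out severity, flags, and pixel-space detections for display."""
    return {
        "severity": response["classification"]["severity"],
        "label": response["classification"]["display_name"],
        "flags": response.get("flags", []),
        "detections": [
            {
                "class": d["class_name"],
                "score": d["score"],
                "bbox_px": to_pixels(d["bbox"], img_w, img_h),
            }
            for d in response["detections"]
        ],
    }
```

Applied to the example response for a 1000×800 image, the detection's bounding box maps to x=120, y=360, w=310, h=496 pixels.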