Anthropic’s Transparency Hub:
Platform Security

A look at Anthropic's key processes, programs, and practices for responsible AI development.

Updated Feb 27, 2025


We are sharing more detail on our Usage Policy, enforcement data, and how we handle legal requests, to enable meaningful public dialogue about AI platform safety.

Banned Accounts

  • 1.1M Banned Accounts (July - December 2024)

Anthropic’s Safeguards Team designs and implements detections and monitoring to enforce our Usage Policy. If we learn that a user has violated our Usage Policy, we may take enforcement actions such as warning, suspending, or terminating their access to our products and services.

  • 36k Appeals (July - December 2024)

  • 1.5k Appeal Overturns (July - December 2024)

Banned users may file appeals to request a review of our decision to ban their account.

Child Safety Reporting

  • 421 Total pieces of content reported to NCMEC (July - December 2024)

Anthropic is committed to combating child exploitation through prevention, detection, and reporting. On our first-party services, we employ hash-matching technology to detect known child sexual abuse material (CSAM) that users may upload and report it to the National Center for Missing & Exploited Children (NCMEC).
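As an illustrative sketch only (not Anthropic's actual system, and deliberately simplified: production CSAM detection typically uses perceptual hashes such as PhotoDNA supplied by organizations like NCMEC, not plain cryptographic digests), hash-matching works by comparing a digest of uploaded content against a set of known hashes:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Compute a hex SHA-256 digest of the uploaded bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_known_content(data: bytes, known_hashes: set[str]) -> bool:
    """Return True if the upload's digest appears in the known-hash set.

    Illustrative only: a cryptographic hash matches exact files, whereas
    real systems use perceptual hashing to also catch re-encoded or
    slightly altered copies of known content.
    """
    return sha256_digest(data) in known_hashes
```

A match would then trigger the reporting pipeline rather than any user-visible action.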

Legal Requests

Anthropic processes data requests from law enforcement agencies and governments in accordance with applicable laws while protecting user privacy. These requests may seek content information or non-content records, or arrive as emergency disclosure requests.

For more information, see our full report here:

January - June 2024 Government Requests for Data