Usage policy update
Today, we’re sharing some updates to our Usage Policy that reflect the growing capabilities and evolving usage of our products. Our Usage Policy serves as a framework for how Claude should and shouldn’t be used, providing clear guidance for everyone who uses Anthropic’s products.
In this update, our goal is to provide greater clarity and detail on our Policy based on user feedback, product changes, regulatory developments, and our enforcement priorities. These changes will take effect on September 15, 2025.
Below is a summary of some of the changes, and you can view the new Usage Policy here.
Addressing cybersecurity and agentic use
Over the past year, we’ve seen rapid advances in agentic capabilities. We've released our own agentic tools like Claude Code and Computer Use, and our models power many of the world's leading coding agents.
These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks, as shared in our first threat intelligence report, Detecting and Countering Malicious Uses of Claude: March 2025.
To address these risks, we've added a section to our Usage Policy outlining prohibited activities involving the compromise of computers, networks, and infrastructure. We continue to support use cases that strengthen cybersecurity, such as discovering vulnerabilities with the system owner's consent.
We’ve also published a new article to our Help Center on how our Usage Policy applies to agentic use more broadly. This supplementary guidance provides concrete examples of prohibited activities in agentic contexts, and is not meant to replace or supersede our Usage Policy.
Revisiting broad restrictions on political content
Our Usage Policy has historically contained broad prohibitions on all types of lobbying or campaign content. We believed this stance was appropriate given the unknown risks of AI-generated content influencing democratic processes, and these remain prominent risks we take seriously.
We’ve heard from users that this blanket approach also limited legitimate use of Claude for policy research, civic education, and political writing. We're now tailoring our restrictions to specifically prohibit use cases that are deceptive or disruptive to democratic processes, or involve voter and campaign targeting. This approach enables legitimate political discourse and research while prohibiting activity that is misleading or invasive.
Updating our language on law enforcement use
Our previous Usage Policy language on law enforcement included various exceptions for back-office tools and analytical applications, which occasionally made it difficult to understand which use cases were permitted.
To address this, we've updated our policy language to be more straightforward. This update does not change what is allowed or prohibited; it simply communicates our existing stance more clearly. We continue to restrict the same areas of concern, including surveillance, tracking, profiling, and biometric monitoring, while maintaining support for appropriate back-office and analytical use cases that were already permitted.
Requirements for high-risk consumer-facing use cases
Our High-Risk Use Case Requirements apply to use cases that have public welfare and social equity implications, including legal, financial, and employment-related use of Claude. These cases require additional safeguards such as human-in-the-loop oversight and AI disclosure.
As Claude usage has expanded across enterprise use cases, we’re clarifying that these requirements apply specifically when models’ outputs are consumer-facing, not to business-to-business interactions.
Looking ahead
We view our Usage Policy as a living document, evolving as AI risks themselves evolve. We will continue to work within Anthropic and with external policymakers, subject matter experts, and civil society to evaluate our policies on an ongoing basis.