We're an AI safety and research company that builds reliable, interpretable, and steerable AI systems. Talk to our AI assistant @claudeai on https://t.co/FhDI3KQh0n.
We asked economists and researchers to explore policy responses to the potential economic effects of powerful AI.
Now, we're sharing some of the initial ideas and feedback we received. https://t.co/joiZAXLqd2
We’re expanding our partnership with @Salesforce.
Claude is now a preferred model in Agentforce for regulated industries. We’re deepening Claude’s integration with Slack, and Salesforce is rolling out Claude Code for its global engineering organization.
https://t.co/2tHHWxsBvM
Our team was grateful for the opportunity to meet with PM @narendramodi and Minister @AshwiniVaishnaw to discuss India's AI future.
We're keen to work together to advance the country's digital ambitions and support the AI Summit in February 2026.
Anthropic CEO Dario Amodei met today with Prime Minister @narendramodi.
We're looking forward to growing our Indian team and supporting India's AI ecosystem as it develops the next generation of dynamic companies.
New research with the UK @AISecurityInst and the @turinginst:
We found that just a few malicious documents can produce vulnerabilities in an LLM, regardless of the model's size or the amount of data it was trained on.
Data-poisoning attacks might be more practical than previously believed. https://t.co/TXOCY9c25t
We’re opening an office in Bengaluru, India in early 2026. We look forward to building with India’s developer community, deploying AI for social benefit, and partnering with enterprises.
Read more: https://t.co/x5otepbqs8
Last week we released Claude Sonnet 4.5. As part of our alignment testing, we used a new tool to run automated audits for behaviors like sycophancy and deception.
Now we’re open-sourcing that tool. https://t.co/cCJGNaVFrl
We’re at an inflection point in AI’s impact on cybersecurity.
Claude now outperforms human teams in some cybersecurity competitions, and helps teams discover and fix code vulnerabilities.
At the same time, attackers are using AI to expand their operations. https://t.co/odoTuPpJXe
New on the Anthropic Engineering Blog: Most developers have heard of prompt engineering. But to get the most out of AI agents, you need context engineering.
We explain how it works: https://t.co/PpMTiT7AEG
Chris Ciauri is joining Anthropic as our Managing Director of International.
He joins during a period of rapid global expansion for Anthropic, as we triple our international headcount across Dublin, Tokyo, London, and Zurich.
Read more: https://t.co/SeFIXPd53x
Claude Sonnet 4 and Opus 4.1 are now available in Microsoft 365 Copilot, bringing Claude’s advanced reasoning capabilities to millions of enterprise users.
Read more: https://t.co/3UTzA9A2Yk
We're partnering with Learning Commons from the Chan Zuckerberg Initiative to address some of the biggest challenges K–12 teachers tell us they face with AI in the classroom: https://t.co/m3A2eQYbUz
New from the Anthropic Economic Index: the first comprehensive analysis of how AI is used in every US state and country we serve.
We've produced a detailed report, and you can explore our data yourself on our new interactive website. https://t.co/YI2qXIQzJO
Our collaboration with the US Center for AI Standards and Innovation (CAISI) and UK AI Security Institute (AISI) shows the importance of public-private partnerships in developing secure AI models.
New on the Anthropic Engineering blog: writing effective tools for LLM agents.
AI agents are only as powerful as the tools we give them. So how do we make those tools more effective?
We share our best tips for developers: https://t.co/N1kFYrTtax
Anthropic is endorsing California State Senator Scott Wiener’s SB 53. This bill provides a strong foundation to govern powerful AI systems built by frontier AI companies like ours, and does so via transparency rather than technical micromanagement.
We've raised $13 billion at a $183 billion post-money valuation.
This investment, led by @ICONIQCapital, will help us expand our capacity, improve model capabilities, and deepen our safety research.
We're announcing the Anthropic National Security and Public Sector Advisory Council, a bipartisan group of defense, intelligence, and policy experts who will help us support the U.S. government and closely allied democracies in maintaining our AI leadership. https://t.co/Yl6dYvE2dS
Our new Threat Intelligence report details how we’ve identified and disrupted sophisticated attempts to use Claude for cybercrime.
We describe a fraudulent employment scheme from North Korea, the sale of AI-created ransomware by someone with only basic coding skills, and more. https://t.co/dQIg8FoQ7e
We’ve developed Claude for Chrome, which lets Claude work directly in your browser and take actions on your behalf.
We’re releasing it first as a research preview to 1,000 users so we can gather real-world insights into how it’s used. https://t.co/lVDKhnPbHY
How do educators use Claude?
We ran a privacy-preserving analysis of 74,000 real conversations to identify trends in how teachers and professors use AI at work. https://t.co/U7K8eS92GR
New Anthropic research: filtering dangerous information out of pretraining data.
We’re experimenting with ways to remove information about chemical, biological, radiological and nuclear (CBRN) weapons from our models’ training data without affecting performance on harmless tasks. https://t.co/YUBlLKIL2c
We’ve made three new AI fluency courses, co-created with educators, to help teachers and students build practical, responsible AI skills.
They’re available for free to any institution. https://t.co/nK2D3W5YcU
We partnered with @NNSANews to build first-of-their-kind nuclear weapons safeguards for AI.
We've developed a classifier that detects nuclear weapons queries while preserving legitimate uses for students, doctors, and researchers. https://t.co/PlZ55ot74l
Join Anthropic interpretability researchers @thebasepoint, @mlpowered, and @Jack_W_Lindsey as they discuss looking into the mind of an AI model, and why it matters: https://t.co/BBb9mvfEN0