Anthropic was founded by former members of OpenAI, including siblings Daniela Amodei and Dario Amodei.[9] In September 2023, Amazon announced an investment of up to $4 billion, followed by a $2 billion commitment from Google the following month.[10][11][12]
History
Anthropic was founded in 2021 by seven former employees of OpenAI, including siblings Daniela Amodei and Dario Amodei, the latter of whom served as OpenAI's Vice President of Research.[13][14]
In April 2022, Anthropic announced it had received $580 million in funding,[15] including a $500 million investment from FTX under the leadership of Sam Bankman-Fried.[16][3]
In the summer of 2022, Anthropic finished training the first version of Claude but did not release it, citing the need for further internal safety testing and a desire to avoid initiating a potentially hazardous race to develop increasingly powerful AI systems.[17]
In February 2023, Anthropic was sued by Texas-based Anthrop LLC for the use of its registered trademark "Anthropic A.I."[18] On September 25, 2023, Amazon announced a partnership with Anthropic, becoming a minority stakeholder with an initial investment of $1.25 billion and a planned total investment of up to $4 billion.[10] As part of the deal, Anthropic would use Amazon Web Services (AWS) as its primary cloud provider and make its AI models available to AWS customers.[10][19] The next month, Google invested $500 million in Anthropic and committed to an additional $1.5 billion over time.[12]
In March 2024, Amazon invested a further $2.75 billion in Anthropic, completing the $4 billion investment it had announced the previous year.[11]
In 2024, Anthropic attracted several notable employees from OpenAI, including Jan Leike, John Schulman, and Durk Kingma.[20]
Company governance
According to Anthropic, the company's goal is to research the safety and reliability of artificial intelligence systems.[7] The Amodei siblings were among those who left OpenAI due to directional differences.[14] Anthropic incorporated itself as a Delaware public-benefit corporation (PBC), which requires the company to maintain a balance between private and public interests.[33]
Anthropic has established a corporate "Long-Term Benefit Trust", an entity created by the company that requires its directors to align the company's priorities with the public benefit rather than profit in "extreme" instances of "catastrophic risk".[34][35] As of September 19, 2023, members of the Trust included Jason Matheny (CEO and President of the RAND Corporation), Kanika Bahl (CEO and President of Evidence Action),[36] Neil Buddy Shah (CEO of the Clinton Health Access Initiative),[37] Paul Christiano (founder of the Alignment Research Center),[38] and Zach Robinson (CEO of Effective Ventures US).[39][40]
Claude
Claude incorporates "Constitutional AI" to set safety guidelines for the model's output.[41] The name "Claude" was chosen either as a reference to mathematician Claude Shannon or as a male name to contrast with the female names of other AI assistants such as Alexa, Siri, and Cortana.[3]
Anthropic initially released two versions of its model, Claude and Claude Instant, in March 2023, with the latter being a more lightweight model.[42][43][44] The next iteration, Claude 2, launched in July 2023.[45] Unlike Claude, which was only available to select users, Claude 2 was made available for public use.[27]
Claude 3 was released on March 4, 2024, comprising three language models: Opus, Sonnet, and Haiku.[46][47] Opus is the largest and most capable; according to Anthropic, it outperforms the leading models from OpenAI (GPT-4, GPT-3.5) and Google (Gemini Ultra).[46] Sonnet and Haiku are Anthropic's medium- and small-sized models, respectively.[46] All three models can accept image input.[46] Amazon has incorporated Claude 3 into Bedrock, an Amazon Web Services platform for cloud-based AI services.[48]
On May 1, 2024, Anthropic announced the Claude Team plan, its first enterprise offering for Claude, along with a Claude iOS app.[49]
On June 20, 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated significantly improved performance on benchmarks compared to the larger Claude 3 Opus, notably in areas such as coding, multistep workflows, chart interpretation, and text extraction from images. Released alongside Claude 3.5 Sonnet was the new Artifacts capability, in which Claude can create code in a dedicated window in the interface and preview renderable content, such as websites or SVG images, in real time.[50]
In October 2024, Anthropic released an improved version of Claude 3.5, along with a beta feature called "Computer use", which enables Claude to take screenshots, click, and type text.[51]
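In API terms, "Computer use" shipped as a beta tool that the model can request while the host application performs the on-screen actions. The minimal sketch below assumes the Anthropic Python SDK and the tool and beta identifiers documented at the beta's launch; the exact model ID, tool type, and beta flag are assumptions here and may have changed since.

    import anthropic  # official Anthropic Python SDK

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Identifiers follow the October 2024 beta documentation; check
    # current docs before relying on them.
    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        tools=[{
            "type": "computer_20241022",  # the "Computer use" tool
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        }],
        messages=[{"role": "user", "content": "Open a browser and check the weather."}],
        betas=["computer-use-2024-10-22"],
    )

    # The model replies with tool_use blocks (e.g. screenshot, left_click,
    # type); the host application must execute each action and feed the
    # result back in a follow-up message for the loop to continue.
    print(response.content)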
Constitutional AI
According to Anthropic, Constitutional AI (CAI) is a framework developed to align AI systems with human values and ensure that they are helpful, harmless, and honest.[13][52] Within this framework, humans provide a set of rules describing the desired behavior of the AI system, known as the "constitution".[52] The AI system then critiques and revises its own outputs against the constitution, and the revised responses are used to adjust the model to better fit it.[52] This self-reinforcing process aims to avoid harm, respect preferences, and provide true information.[52]
Some of the principles of Claude 2's constitution are derived from documents such as the 1948 Universal Declaration of Human Rights and Apple's terms of service.[45] For example, one rule from the UN Declaration applied in Claude 2's CAI states "Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood."[45]
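As a rough illustration of the critique-and-revision loop, the sketch below uses a hypothetical generate() helper as a stand-in for calling the model; the prompts and the one-principle constitution are illustrative, not Anthropic's actual training code.

    # Sketch of Constitutional AI's supervised phase: the model drafts a
    # response, critiques it against each constitutional principle, and
    # revises it. generate() is a hypothetical stand-in for a model call.

    CONSTITUTION = [
        "Please choose the response that most supports and encourages "
        "freedom, equality and a sense of brotherhood.",
    ]

    def generate(prompt: str) -> str:
        # Placeholder so the sketch runs end to end; a real pipeline
        # would sample from the language model here.
        return f"[model output for: {prompt[:40]}...]"

    def critique_and_revise(user_prompt: str) -> str:
        response = generate(user_prompt)
        for principle in CONSTITUTION:
            critique = generate(
                f"Principle: {principle}\nResponse: {response}\n"
                "Point out any ways the response conflicts with the principle."
            )
            response = generate(
                f"Response: {response}\nCritique: {critique}\n"
                "Rewrite the response to address the critique."
            )
        # Revised responses become supervised fine-tuning data; a later
        # stage uses AI-generated preference labels for reinforcement
        # learning ("RL from AI feedback").
        return response

    print(critique_and_revise("Tell me about your safety rules."))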
Interpretability research
Part of Anthropic's research aims to automatically identify "features" in generative pretrained transformers like Claude. In a neural network, a feature is a pattern of neural activations that corresponds to a concept. Using a compute-intensive technique called "dictionary learning", Anthropic was able to identify millions of features in Claude, including, for example, one associated with the Golden Gate Bridge. Enhancing the ability to identify and edit features is expected to have significant safety implications.[55][56][57]
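Anthropic's published work implements this dictionary learning with sparse autoencoders trained on a model's internal activations. The sketch below illustrates the general idea; the dimensions, sparsity weight, and random stand-in activations are illustrative choices, not values from Anthropic's setup.

    # Dictionary learning via a sparse autoencoder: reconstruct model
    # activations as sparse combinations of learned "feature" directions.
    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        def __init__(self, d_model: int = 512, n_features: int = 4096):
            super().__init__()
            self.encoder = nn.Linear(d_model, n_features)
            self.decoder = nn.Linear(n_features, d_model)

        def forward(self, activations: torch.Tensor):
            features = torch.relu(self.encoder(activations))  # sparse, non-negative codes
            reconstruction = self.decoder(features)
            return features, reconstruction

    sae = SparseAutoencoder()
    optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
    activations = torch.randn(64, 512)  # stand-in for residual-stream activations

    features, reconstruction = sae(activations)
    # L2 reconstruction error plus an L1 penalty on the codes: the penalty
    # pushes most features to zero, so each one learns to fire on a
    # narrow, potentially interpretable pattern (a "feature").
    loss = ((reconstruction - activations) ** 2).mean() + 1e-3 * features.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Once trained, individual features can be inspected or amplified at inference time; amplifying a single learned feature is the kind of editing behind demonstrations such as the Golden Gate Bridge feature.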
Lawsuits
On October 18, 2023, Anthropic was sued by Concord, Universal, ABKCO, and other music publishers for, per the complaint, "systematic and widespread infringement of their copyrighted song lyrics."[58][59][60] They alleged that the company used copyrighted song lyrics without permission and sought up to $150,000 for each work infringed.[61] In the lawsuit, the plaintiffs supported their allegations by citing several examples in which Anthropic's Claude model output copied lyrics from songs such as Katy Perry's "Roar" and Gloria Gaynor's "I Will Survive".[61] They further alleged that, even given prompts that did not directly name a song, the model responded with modified lyrics based on the original works.[61]
On January 16, 2024, Anthropic responded that the music publishers had not been irreparably harmed and that the examples cited by the plaintiffs were merely bugs.[62]
In August 2024, a class-action lawsuit was filed against Anthropic in California for alleged copyright infringement. The suit claims that Anthropic trained its LLMs on pirated copies of authors' works, including those of plaintiffs Kirk Wallace Johnson, Andrea Bartz, and Charles Graeber.[63]
References
Morley, John; Berger, David; Simmerman, Amy (28 October 2023). "Anthropic Long-Term Benefit Trust". Harvard Law School Forum on Corporate Governance. Retrieved 3 April 2024.
Templeton, Adly; Conerly, Tom; Marcus, Jonathan; Lindsey, Jack; Bricken, Trenton; et al. (2024). "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet". Anthropic. Retrieved 24 May 2024.