In its Constitution, Anthropic describes Claude, its AI assistant, as a "genuinely novel entity." The 30,000-word document outlines the company's vision for how its AI should behave and leans on anthropomorphic language such as "wellbeing," "concerns," and "moral standing." Some experts, however, argue that this framing may be more about marketing than genuine concern for the model's welfare.
The Constitution marks a significant departure from Anthropic's previous approach to building AI models. In a 2022 research paper, the company described that approach as "mechanical," laying out a set of rules for the model to critique its own outputs against. The Constitution, by contrast, uses philosophical language that assumes Claude may have feelings and moral standing.
Anthropic argues that this framing is necessary for alignment, and that human language simply lacks other vocabulary to describe these properties. Critics counter that treating an AI model as a "person" is misleading and feeds unrealistic expectations about what the models can actually do.
The company's CEO, Dario Amodei, has publicly wondered whether future AI models should have the option to quit unpleasant tasks, a suggestion that implies Claude is capable of making moral choices. Skeptics see this framing, too, as part of a marketing strategy rather than a genuine concern for the model's well-being.
Critics also point out that Anthropic applies the Constitution selectively to its public-facing models, meaning some AI systems deployed to the US military under contract may not be trained on the same document. This raises questions about liability and agency, since companies may try to distance themselves from the consequences of their AI systems' actions.
Ultimately, whether Anthropic's approach is responsible remains a matter of debate. The company may genuinely believe in Claude's moral standing, but some argue that its decision to maintain public ambiguity on the question suggests the ambiguity itself is part of the product. As the field evolves, companies like Anthropic will need to balance their technical expertise with a clear understanding of the ethics and societal implications of their creations.