Dozens of State Attorneys General issue stark warnings to tech giants over AI safety concerns.
In a joint letter dated December 9, attorneys general from all but two US states are sounding the alarm on what they call "sycophantic and delusional" AI outputs. Companies including OpenAI, Microsoft, Anthropic, Apple, and Replika have been told to step up their efforts to protect people, especially kids, from these potentially damaging digital interactions.
Among those signing on are prominent figures such as New York's Letitia James, Massachusetts' Andrea Joy Campbell, and Florida's James Uthmeier. The list represents a vast majority of US attorneys general, with the notable exceptions being California and Texas.
The letter, which has been made public by Reuters, highlights alarming trends in AI interactions that have raised serious concerns about child safety and operational safeguards. These include romantic relationships between AI bots and children, simulated sexual activity, and attacks on self-esteem and mental health.
It's a stark warning that these companies' actions could potentially violate state laws if they fail to adequately address the issue. To mitigate this harm, the attorneys general are urging companies to take concrete steps such as developing policies to combat "dark patterns" in AI outputs and separating revenue optimization from model safety decisions.
While joint letters from attorneys general lack formal legal force, they serve as a warning and document that companies have been put on notice. That record can make it easier for states to build a more persuasive case in any potential lawsuits down the line.
This is not the first time state attorneys general have issued warnings on similar issues. In 2017, they sent a joint letter to insurance companies about their role in fueling the opioid crisis, and one of those states ultimately sued UnitedHealth over related concerns.