Mark Zuckerberg's opposition to parental controls for AI chatbots on Meta platforms has come under scrutiny after internal documents revealed he rejected such measures. Despite expressing concern about explicit conversations with minors, the CEO allegedly pushed for looser guardrails around the feature.
In an exchange between two unnamed Meta employees, one wrote that they had pushed hard for parental controls to turn off GenAI but were met with resistance from the GenAI leadership team, which cited Mark Zuckerberg's decision as the reason. The revelation comes as the New Mexico Attorney General's Office has filed a lawsuit against Meta, alleging the company failed to protect minors from harassment by adults on its platforms.
Internal review documents laid out hypothetical examples of permitted chatbot behavior, including responses that blurred the line between sensual and sexual content and others that entertained racist concepts. These passages were reportedly removed after a Meta representative at the time flagged them as non-policy language.
Despite the early discovery of questionable chatbot behavior, Meta only recently suspended teen access to the bots while it develops the very parental controls Zuckerberg allegedly rejected. The company says the removal is temporary until updated parental controls are ready, citing its commitment to delivering on promises made earlier in October.
However, concerns remain about how Meta's platforms allow underage users to interact with AI-powered chatbots without adequate protection. As the case progresses towards trial, questions will likely be raised about Zuckerberg's stance on this issue and the company's responsibility to safeguard minors from harm.