A growing number of workers who moderate AI-generated content have started telling their friends and family to steer clear of these tools, citing concerns over accuracy, ethics, and potential harm. For some, it was a single experience that shook them, like Krista Pawloski, an Amazon Mechanical Turk worker who has spent years assessing the quality of AI outputs.
One tweet containing a racial slur caught her off guard: she initially didn't recognize the word as a slur, and the near miss made her realize how much offensive material she and other moderators might have unknowingly let slip through. The incident led her, and several workers like her, to stop using generative AI products personally and to encourage their loved ones to do the same.
These workers are largely invisible to the public but play a crucial role in fine-tuning AI models, making sure they don't spout inaccurate or harmful information. With companies prioritizing speed and profit over responsibility and quality, however, that task has become increasingly daunting.
Their concerns are echoed by experts who say chatbots' responses to health-related questions often lack credible sourcing, calling into question the trustworthiness of AI tools as sources of news and information.
"It's an absolute no in my house," Pawloski says about letting her teenage daughter use tools like ChatGPT. "I encourage people to ask AI something they're knowledgeable about so they can spot its errors and understand for themselves how fallible it is."
Experts warn that prioritizing speed over quality points to a larger problem: when the feedback these workers provide is ignored, the same errors surface in the finished chatbot. They also stress the need for more transparency and accountability in AI development.
As a result, these workers are taking matters into their own hands, educating people about using AI cautiously and emphasizing that a model is only as reliable as the data it is trained on. Their warnings come as an audit found that chatbots' non-response rates had dropped to zero even as their likelihood of repeating false information doubled.
A growing chorus of voices from within the industry is urging caution around these tools, calling for a more nuanced understanding of AI and its limitations. "We are just starting to ask those questions," says Pawloski, echoing the sentiments of other workers who are now sounding the alarm about the ethics and potential harm of generative AI.
With companies favoring speed and profit over quality and responsibility, the invisible workers behind AI models are increasingly left to sound the alarm themselves. Their growing pushback may yet force a shift in how the industry approaches AI development and how it treats these critical but often overlooked voices.