Amazon discovered a 'high volume' of CSAM in its AI training data but isn't saying where it came from

I'm really concerned about this 🤕. How can a company as big as Amazon just sweep something this serious under the rug? We're talking about millions of reports of child abuse material, and they still haven't told us where it came from? That's not good enough for me. Companies need to be more transparent about their methods and make sure they're doing everything they can to stop CSAM from spreading. AI is supposed to help us, not hurt us 🤖. And what's with the excuses about being overly cautious? That just sounds like a cover-up to me. We need stricter regulations so companies can't just hide behind "proactive safeguards" 💔
 
omg, I'm so worried about this 🤯! Who exactly is responsible for this massive amount of child abuse material ending up in their training data? It feels like they're just sweeping it under the rug and hoping no one notices 🚮. And Amazon's explanation that it came from external sources doesn't add up - if that were true, wouldn't they be more transparent about it? I think the experts are right: this is an example of how overly cautious companies flood the system with reports, and real cases get lost in the noise.

We need to hold these companies accountable and demand more transparency! Like, what's going on behind the scenes? How much data is being used to train their AI services? It's not like they're just going to magically fix this issue without anyone questioning it 🔍. This whole thing makes me super uncomfortable 😕
 
omg, this is so worrying 🤕. I thought Amazon was a responsible company, but now I'm not so sure... How did they even end up with that much CSAM in their training data? Didn't they have any vetting process in place? And what about them accounting for 99% of the reports? That's just crazy 🤯... Isn't it weird that they're being so secretive about where it's coming from? Wouldn't that just make things worse? If people knew how widespread the problem was, maybe something would actually change.

I mean, I get that they want to minimize false positives, but can't they find a balance? Transparency and accountability should be key here! 🤝 What safeguards are actually in place to keep this stuff out of their AI systems? Are they doing enough to protect users? These questions just keep popping up in my head...
 
🤔 It's crazy to think about how much CSAM was found in Amazon's training data... What are the odds that a major company like Amazon just picked this stuff up from external sources without anyone noticing? 🙄 It's also wild that they're being so secretive about it. I get that they're worried about false positives, but is it really that hard to share some basic info on where the data came from? 💻 And yeah, this just makes me think we need way more regulation around AI development and deployment... It's too easy for bad stuff to slip through when no one's keeping tabs 👀
 