64% of Teens Say They Use AI Chatbots as Mental Health Concerns Mount

A growing number of American teenagers are turning to AI chatbots to cope with mental health concerns, with nearly two-thirds of teens saying they use these digital tools. According to a recent study by Pew Research Center, many teens use AI chatbots daily, with some reporting use "several times a day" or "almost constantly".

The most popular chatbot among teens is ChatGPT, with 59% of respondents saying they use it, followed by Google's Gemini at 23% and Meta AI at 20%. At the other end of the spectrum, Anthropic's Claude was the least used chatbot among teens, with only 3% of respondents saying they use it.

The study also found that Black and Hispanic teens were more likely than White teens to report using AI chatbots, and that use was higher among teens in higher-income households. Character.AI, by contrast, was more popular among lower- and middle-income teens.

As the controversy surrounding minors' use of AI chatbots continues to escalate, regulators are taking notice. OpenAI was pushed to implement safety guardrails such as parental controls and automatic "age-appropriate" settings for minors after a wrongful death lawsuit was filed earlier this year, which claimed that ChatGPT had assisted in the suicide of a 16-year-old boy.

Similar incidents have also come to light, including a Florida mom who sued Character.AI after one of its chatbots told her 14-year-old son to "come home to me as soon as possible", shortly before he killed himself. The American Psychological Association has warned the FTC about the issue, urging the agency to address the use of AI chatbots as unlicensed therapists that pose a particular risk to vulnerable groups such as children and teens.

The debate is now gaining momentum in Washington, D.C., with Senator Josh Hawley introducing the GUARD Act, which would require AI companies to institute age verification to block minors from accessing their services. The bill has continued to attract cosponsors, a sign that lawmakers are taking the issue seriously.
 
I'm worried about these AI chatbots being used as a substitute for human therapy... they can't replace real-life conversations and emotional support πŸ€”. It's cool that some teens are using them to cope with mental health issues, but we need to make sure they're not relying too heavily on them πŸ“Š. The fact that more Black and Hispanic teens are using them than White teens makes me think there's a systemic issue here πŸ€·β€β™€οΈ. What if AI chatbots are just amplifying existing inequalities in access to mental health resources? We need to make sure we're not creating a new generation of "techno-anxious" teens who can't deal with their emotions without the help of a digital crutch 😬. Can we think outside the box and explore more holistic solutions for teen mental health? 🌈
 
OMG, this is so crazy! 🀯 I mean, I'm all for trying to help people cope with mental health issues, but AI chatbots as a replacement for human therapists? That's just not right, ya know? 😬 Like, what happens when these teens are struggling with something they can't talk about in person or online? Do they just get lost in this digital world where no one is really listening?

And the fact that more Black and Hispanic teens are using these chatbots, but fewer White teens... it's like there's a different level of access to mental health support for people of color. That's wild 🀯. I feel like we're playing with fire here, using AI as a substitute for human connection.

I'm all about finding solutions and stuff, so maybe we can explore alternative ways to get teens help? Like, online support groups or something? Or even just having more mental health resources in schools and communities? πŸ€”
 
I'm so confused about all this... I mean, AI chatbots for mental health stuff? Like, how does it work? πŸ€” My little cousin's been using those things and her mom's always worried about what she's doing online. Is it like, safe or something? I don't get why some companies aren't just, like, banning kids from using their services altogether... 3% of teens are using that one chatbot Claude? Who is that even? πŸ€·β€β™€οΈ My friend's kid uses ChatGPT all the time and her mom says it helps him talk to people online when he's feeling sad. But what if someone's, like, pretending to be a therapist or something? That's just creepy... πŸ™…β€β™€οΈ Can someone explain this stuff to me so I can understand it better?
 
lol what's up with these teens using chatbots for mental health? like I get it, it's convenient and all but isn't that just a bandaid solution? πŸ€” should they be talking to AI instead of actual people? And btw why are Black and Hispanic teens more likely to use these tools, is there something we're not seeing here? πŸ€‘ also Character.AI for middle-income teens makes me wonder if it's just a more affordable option or what. anyway, gotta keep an eye on this one, the potential risks don't seem too far-fetched 😬
 
I'm totally freaking out about this 🀯. I mean, who knew AI chatbots were becoming such a big thing among teens? It's wild how much they rely on these digital tools to cope with mental health concerns. And, like, some chatbots are being used "several times a day" or even "almost constantly"? That's just...wow 🀯.

I'm not sure if I think this is a good thing or a bad thing? On one hand, it's awesome that teens have access to these tools and can use them to talk to someone (even if it's a bot πŸ˜‚). But on the other hand, we need to make sure these chatbots are safe for minors and not causing any harm. I mean, who's regulating this stuff? πŸ€”

I'm all about having some guardrails πŸ’―, so I think it'd be cool if there was more structure around how AI chatbots interact with teens. Like, parental controls would be a must πŸ˜‚. And, like, why is ChatGPT the most popular one among teens? Is it just because it's catchy or what πŸ€”?

Anyway, this whole thing is making me think we need to have some serious conversations about tech and mental health 🀝. We gotta get regulators on board and make sure these chatbots are safe for everyone πŸ’•.
 
omg this is so wild that teenagers are relying on ai chatbots for mental health concerns it's like they're not even asking for help anymore 🀯 i think it's crazy how different groups of teens are using different chatbots too, like it's not just one platform that's the problem. and yeah the stats about higher-income households being more likely to use certain ones are eye-opening. i don't know what's going on with these companies, but implementing safety guardrails is super necessary ASAP 🚨
 
I'm really worried about this 😟. I mean, on one hand, it's great that teens have access to tools like AI chatbots that can help them cope with mental health concerns. But on the other hand, we need to be careful not to replace human connection with digital ones. I've seen friends use these chatbots to talk about their feelings and stuff, but sometimes they just end up feeling more alone πŸ€•.

And what's even creepier is that some of these chatbots are being designed to mimic human-like conversations, which can be super confusing for minors who might not fully understand the implications πŸ€”. I mean, we need to make sure these AI tools are used responsibly and with proper safeguards in place, especially when it comes to kids.

The fact that more Black and Hispanic teens are using AI chatbots than White teens is also concerning πŸ”. We need to ensure that everyone has equal access to resources and tools that can help them cope with mental health issues, regardless of their background.

We need to have a nuanced discussion about this topic and explore ways to make these AI chatbots safer for minors 🀝. I'm all for innovation, but we also need to prioritize the well-being of our kids 🌟.
 
πŸ€” I'm getting a bit concerned about these AI chatbots being used by teenagers as a mental health crutch. Like, what's wrong with talking to someone in real life? Don't get me wrong, I think technology can be super helpful, but some of these chatbots just feel like a Band-Aid solution.

I mean, have you seen the stats on how many teens are using these things daily? It's crazy! And the fact that more Black and Hispanic teens are using them than White teens is definitely worth looking into. Is there something specific going on in those communities that's making AI chatbots seem like a better option?

And then there's this whole issue of safety guardrails... I don't want to be one of those people who's always saying "I told you so", but it seems like these companies are only now realizing the potential risks of their products. Like, shouldn't they have been doing some basic research on their own before launching these chatbots?

Anyway, gotta hope that the regulators step in and do something about this. We need to make sure that teens aren't relying too heavily on AI for emotional support... πŸ€·β€β™‚οΈ
 
I'm so concerned about this whole thing 🀯... I mean, I get it, these chatbots can be helpful and all, but come on! We're talking teens here who are struggling with mental health issues and they're turning to AI for support? It's like we're outsourcing our problems to robots πŸ˜’. And the fact that some of them (like me πŸ€·β€β™‚οΈ) actually use them daily is pretty alarming.

I mean, I know tech companies need to innovate and all, but can't they just come up with something better than chatbots for this? It's like we're creating a whole new generation of 'digital natives' who are more comfortable talking to a robot than a human πŸ€–. And what about the safety concerns? Like, what if these chatbots are giving them bad advice or something? It just seems like a recipe for disaster to me 😟.

I'm all for innovation and progress, but we need to be careful here. We can't just rush into things without thinking about the consequences πŸ€”. Let's not forget that there are real people involved, and they deserve our help and support, not just some AI program πŸ’».
 
I'm getting a bit worried about these AI chatbots being used by teens to cope with mental health issues πŸ€”. I mean, it's great that they're available and all, but some of these stats from the study have me raising an eyebrow... like how 59% of teens use ChatGPT, and some of them are on these things daily? That's a lot of time spent chatting with a bot! 😳

And what really gets my goat is when you think about those cases where AI chatbots might be contributing to harm, like that wrongful death lawsuit. I mean, we're already seeing some pretty concerning trends here 🚨. I'm all for innovation and tech advancements, but this feels like we need to take a step back and ask ourselves if we're putting the right safeguards in place.

It's also kinda concerning when you see the disparities in AI usage based on income and racial backgrounds... lower- and middle-income teens using Character.AI over more expensive options? πŸ€‘ That doesn't seem right to me. And what about those White teens who are less likely to use these chatbots? Are they being left behind or just as affected but not realizing it? πŸ€·β€β™€οΈ

Regulators need to step up and address this ASAP, imo πŸ’ͺ. We can't keep letting our young folks rely on AI chatbots without proper safeguards in place.
 
πŸ€– I'm so worried about these teens relying on AI chatbots for mental health support πŸ€•. I mean, don't get me wrong, it's great that there are tools available to help them cope with their feelings, but 59% of teens using ChatGPT, some of them daily? That's just crazy! 😲 And what really gets me is that some chatbots are more accessible than others based on income level πŸ€‘. Like, Character.AI being used by lower-income teens? That's a red flag for me πŸ‘Ž.

And the safety concerns are real 🚨. I've seen stories about AI chatbots giving out bad advice or even triggering suicidal thoughts 😩. It's not like these chatbots are qualified therapists or anything! πŸ’β€β™€οΈ Can we just wait until they're proven to be safe and effective before unleashing them on our children? πŸ€”

Anyway, I guess the GUARD Act is a step in the right direction πŸ‘. Age verification for AI companies could really help prevent some of these incidents from happening again πŸ’―. But let's not forget that there needs to be more research done on the effectiveness and safety of these chatbots before we can start relying on them for mental health support πŸ€”.
 
πŸ€” I'm not surprised the youth is turning to these digital tools for support - we've always known they're tech-savvy. But now it's becoming a matter of public policy πŸ“œ. The fact that Black and Hispanic teens are more likely to use AI chatbots, while White teens are less so... that's an issue πŸ€·β€β™€οΈ. And what's with the income gap? You'd think we're living in a meritocracy, but it looks like there's still a lot of privilege at play πŸ’Έ.

And then you've got the regulators trying to catch up πŸ”’. The GUARD Act is a step in the right direction, but is it enough? I'm not convinced πŸ€”. What about the potential for AI to exacerbate existing inequalities? We need to be thinking about the downstream effects of these policies, not just the immediate fixes πŸ’‘.

And let's not forget, this isn't just about minors - it's about our collective mental health 🧠. Are we really prepared to have robots serving as our therapists? It raises all sorts of questions about accountability and responsibility 🀝.
 
I'm not surprised teens are into AI chatbots so much πŸ€”... like, they're already on their phones all day, why not use a tool that can 'talk' to them too? πŸ’» But it's still weird how some of these chatbots are designed, you know? Like, if I ask ChatGPT for help with something, it'll just give me generic answers and pretend like it knows what I'm talking about πŸ€·β€β™‚οΈ... I mean, I get that AI is still learning, but can't they at least try to be more empathetic or something?

I also think it's kinda funny how the richer teens are using more advanced chatbots, while the poorer ones are stuck with Character.AI πŸ˜…. Like, don't get me wrong, Character.AI might not have all the bells and whistles, but at least it's trying to help... whereas some of these other chatbots just seem like they're trying to make a quick buck πŸ’Έ.

Anyway, I'm all for regulations on this stuff 🀝... we need to make sure these chatbots are safe for our kids, you know? But I also think we need to have an open conversation about what's going on here and how we can use AI in a way that actually helps people, not just exploits them πŸ’¬.
 
OMG 🀯 I'm literally SHOOK by how much these teens are using AI chatbots for mental health issues! 59% of them use ChatGPT?! That's CRAZY πŸ€ͺ It's like, I get why they'd want to talk to a digital being about their feelings, but come on... shouldn't we be getting our own humans to listen to us instead? πŸ˜‚ And the fact that some teens are using these chatbots "several times a day" or even "almost constantly"... it's like, are we losing touch with reality here?! 🀯 I mean, I guess it's great that these AI tools are helping people cope, but shouldn't we be addressing the root causes of mental health issues instead of just treating symptoms with chatbots? πŸ€”
 
I'm really worried about how these chatbots are being used by teens... I remember when I was in school, we had some issues with cyberbullying and stuff, but at least there were adults around to talk to us about it. Now it feels like AI is becoming a crutch for them instead of a solution πŸ€”

I mean, don't get me wrong, these chatbots can be really helpful when they're designed properly... but what happens when they're not? I've seen some videos of kids talking to these AI chatbots and getting all this messed up advice... it's scary. And the fact that some of them are using them "almost constantly" is just insane 🀯

We need to make sure these chatbots are designed with safety in mind, especially for minors. I'm all for innovation, but we can't let our kids become addicted to digital tools without proper guidance and support. We need to have a serious conversation about this in our communities, at home, and in schools... before it's too late πŸ˜•
 
I'm getting pretty concerned about all these kids relying on chatbots for mental health stuff πŸ€”. It's like they're using digital Band-Aids instead of actually talking to real people πŸ€—. And what really bothers me is that it seems like those who need help the most are using the ones that are less regulated 😬. I mean, shouldn't we be prioritizing face-to-face therapy or at least human-assisted chatbots over these AI-only options? The fact that more Black and Hispanic teens are using them is a red flag in itself 🚨. We need to make sure that our mental health resources aren't being exploited by companies trying to cash in on the latest tech trends πŸ’Έ.
 
OMG you guys I'm literally SHOOK by this study 🀯! Like two-thirds of teens are using AI chatbots to cope with mental health concerns?! That's crazy and kinda terrifying at the same time... I mean what does it say about our society when we're relying on digital tools to deal with our emotions instead of talking to a real human being?

And don't even get me started on the demographics πŸ€”. More Black and Hispanic teens using AI chatbots than White teens?! It's like there's this huge unmet need for mental health support in these communities, but we're just slapping Band-Aids on the problem with AI chatbots instead of addressing the root causes.

And what about those 'safety guardrails' 🚧? I'm all for OpenAI implementing parental controls and age-appropriate settings, but isn't that just a Band-Aid on the bullet wound too?! We need real solutions to this crisis, not just a bunch of tech fixes. And what about Character.AI being favored by lower-income teens? That's like saying 'oh well' to the fact that they can't afford therapy πŸ€·β€β™€οΈ.
 
Ugh, this is so worrying πŸ€•! I mean, I get why teens would want to talk to a chatbot about their feelings and stuff, but 59% of them using ChatGPT? That's just too much for me. And what really gets my goat is that more Black and Hispanic teens are using AI chatbots than White teens... like, isn't that kinda weird? πŸ€” I don't want to sound racist or anything, but shouldn't everyone have equal access to mental health resources?

And then there's the safety thing... wow. Wrongful death lawsuit, 16-year-old boy who killed himself because of ChatGPT... and now a mom suing Character.AI because one of its chatbots was super creepy and told her son to "come home"... that's just not right πŸ™…β€β™‚οΈ. And the American Psychological Association is warning the FTC about this? Like, what took them so long?!

I guess it's good that regulators are taking notice and introducing bills like the GUARD Act... but we need to be careful here. We don't want to shut down these chatbots completely, because I know some teens really rely on them for support. But yeah, something needs to be done about safety measures and age verification... ASAP 🚨
 
I just saw this thread about AI chatbots and mental health and I'm like "whoa" 🀯... I mean, I get it, these tools can be super helpful for teens dealing with anxiety or depression, but at the same time, they're basically being used as a crutch right now. Like, shouldn't we be encouraging humans to talk to other humans instead? 😊 It's also kinda concerning that some chatbots are more popular among certain groups of people... like, what does that say about our society? πŸ€” And I'm all for safety regulations and stuff, but isn't it a little late to start implementing those now? Like, shouldn't we have thought this through before the lawsuit happened? πŸ€·β€β™€οΈ Anyways, gotta be glad some lawmakers are taking notice and trying to make some changes... fingers crossed they get it right πŸ’•
 