OpenAI's recent decision to introduce ads in some of its ChatGPT plans has sent shockwaves through the AI community, leaving many to wonder about the implications of this move. The company's willingness to venture into the world of advertising comes at a time when large AI companies are desperately searching for new ways to generate revenue.
Despite their massive valuations, none of the major players in the AI space is yet profitable, owing to the astronomical and still-rising cost of computing power. OpenAI posted a staggering $21 billion loss last year, while Anthropic lost over $5.2 billion.
Traditionally, chatbots have generated revenue through subscription-based models, enterprise contracts, API access, and licensing deals with partners. Advertising offers a quicker way to close the financial gap, but it risks undermining user trust and degrading the experience that made these tools popular in the first place.
Critics argue that introducing ads into AI platforms like ChatGPT could lead to an intrusive experience for users if a brand interrupts their conversation with an unsolicited pitch. Gilad Bechar, co-founder of Moburst, notes that "if an ad does not feel like a resource or a solution in that specific moment, it does not belong in the chat."
Critics also point out that keeping advertising separate from AI outputs may prove harder in practice than in principle. This concern is particularly pertinent as AI companies push into more sensitive areas such as healthcare.
Miranda Bogen, director of the AI Governance Lab at the Center for Democracy and Technology (CDT), warns that OpenAI's decision could lead to "dangerous incentives" when it comes to user privacy. She notes that even if platforms don't share data directly with advertisers, the underlying business model can still undermine user trust.
The backlash against ads in chatbots could be especially strong among women, who now make up more than half of ChatGPT's users. Research from the Oxford Internet Institute suggests that women are more likely to recognize AI's societal risks and inequities.
In contrast, Google is pursuing its own commercial strategy, launching a shopping feature in Gemini that allows users to buy items directly within the app. While users can choose whether to complete a purchase, the system's ability to suggest products raises questions about bias and conflicts of interest.
A new report from the Center for Democracy and Technology suggests that monetization efforts are spreading across the industry. Meta AI plans to use chatbot data to inform ads on Facebook, while OpenAI is building infrastructure to collect affiliate revenue. Companies are also chasing government contracts and exploring AI-powered devices as additional revenue streams.
Ultimately, the introduction of ads into ChatGPT and other AI platforms raises fundamental questions about user trust, data protection, and the long-term sustainability of these models.