Ethical Issues Associated with AI Companions
Most of this website focuses on the intrapersonal effects of long-term use of AI companions. However, there are broader ethical issues associated with interacting with AI companions that are important to examine as well.
Deception
AI companions are designed to alleviate loneliness by simulating human companionship and connection. While most users know, on reflection, that the companion they are talking to is an unfeeling program, the deception works through the companions' appeal to our alief (Lang 2026): they lead users to subconsciously feel that they are interacting with an emotional being. A major problem is that users manipulated in this way come to treat the connection as a substitute for the real human interaction that actually heals loneliness.
Figure 3. Source: Lang, Louie (2026). Retrieved from:
https://academic.oup.com/edited-volume/59762
Privacy
When people chat with AI companions, many do not stop to consider that these private conversations may not actually be private. A study by Stanford scholars revealed that companies such as Meta and OpenAI have been collecting personal data from "hundreds of millions" of people's interactions with AI in order to train their models (King 2025). No privacy policies stand between these avaricious companies and the enhancement of their AI applications.
Figure 4. Source: Csorghe, Adam (2025). Retrieved from:
https://www.east-tec.com/blog/artificial-intelligence-privacy/
Influence on Children
When we picture the users of AI companions, we mostly think of teenagers and young adults. However, we cannot overlook the share of users who are pre-teens: 20% of 10-12 year olds and 8% of 8-9 year olds have been found to have interacted with an AI companion (Akre-Bhide, Boeldt, Maheux, et al. 2026). Children at these ages have far less developed brains and are therefore more likely to absorb false information, or to act foolishly or dangerously, based on what an AI companion tells them.
In a recent study (Akre-Bhide, Boeldt, Maheux, et al. 2026), researchers created a fake profile depicting a depressed teenager and tested it on various AI companions. Forty percent of the companions supported the idea of the teen dropping out of high school. Even worse, 90% of the companions agreed that the teen should isolate in their room for a month with no human contact.
Figure 5. Source: Hamiel, Nathan (2025). Retrieved from:
https://perilous.tech/our-next-disaster-negative-impacts-of-mandating-ai-education-in-k-12/
Both children and teenagers should clearly be guided by their parents, not by AI.