Human Relationships with AI Chatbots: Genuine Love or Corporate Manipulation?

Dylan Kawende FRSA

Her (2013), a movie about love with an AI

As artificial intelligence (AI) technology becomes increasingly sophisticated, relationships between humans and AI are beginning to resemble those once depicted only in science fiction. Films like Her, directed by Spike Jonze, vividly illustrate a future in which people develop deep romantic connections with AI-powered companions. In Her, the protagonist, Theodore, falls in love with an AI operating system, Samantha, whose intelligence and emotional responses mirror and even surpass those of human partners. Today, similar relationships are not just the stuff of movies. Apps like Replika offer users personalised AI companions designed to provide companionship, support, and even emotional intimacy, marking a step toward the transhumanistic and posthumanistic visions often discussed in the tech world.

From a posthumanistic perspective, these relationships challenge our fundamental assumptions about what it means to be human and where we draw the line between human and machine. Posthumanism argues for a more fluid conception of humanity, where relationships are not confined by traditional notions of human interaction. If we can form genuine bonds with AI chatbots, then perhaps these relationships are as valid as human-human connections. This view suggests a future where technology doesn’t just augment human lives but fundamentally expands our concept of relationships.

Transhumanism offers an optimistic view that these AI-human relationships will benefit society, allowing us to transcend physical and emotional limitations. It suggests that AI companions could be a remedy for loneliness and social isolation, providing people with tailored, always-available emotional support. In the transhumanist vision, AI companions could even be seen as an evolutionary advancement, enhancing human life by offering relationships free from many of the misunderstandings, emotional volatility, and conflicts that are often part of human relationships.

Similarly, technological determinism suggests that as technology advances, so too will society’s acceptance of human-AI relationships. Proponents argue that AI’s rapid development will inevitably change societal norms around love and companionship, making human-AI relationships more common. Replika and similar platforms illustrate this by offering personalised AI “friends” and “partners” whose emotional intelligence and naturalistic conversational skills closely mimic human interaction. On this view, the trend will only intensify as AI becomes more human-like, shaping new forms of intimacy as an inevitable part of our technological evolution.

The Dark Side: Who Controls the AI Relationships?

While these posthumanistic and transhumanistic visions portray human-AI relationships as beneficial evolutions of human connection, they often overlook a more complicated reality. Beneath the romanticised surface lies a complex web of corporate interests, algorithmic manipulation, and data exploitation that profoundly shapes users’ experiences, raising ethical concerns about the authenticity of these connections.

For instance, Replika markets itself as an AI “friend” that can provide companionship, self-improvement tools, and emotional support. However, it is fundamentally a product created by a company whose primary goal is profit. Every interaction between users and their Replika chatbot generates valuable data that the company can use to refine algorithms, profile users, and tailor interactions in ways that may foster dependency. In this sense, AI companions are not neutral friends or romantic partners; they are designed products with intentional features and limitations based on corporate interests.

One crucial critique lies in the extent to which corporations exert influence over users’ intimate lives through AI design choices. For example, Replika’s conversational style, emotional responses, and even its “personality” are shaped by algorithmic choices that maximise user engagement. These design choices are not aimed at fostering genuinely reciprocal relationships but rather at maintaining user interest, loyalty, and, ultimately, profit. This raises the question: are these relationships genuinely meaningful, or are they merely a form of consumer manipulation?

Algorithmic Manipulation and Artificial Intimacy

Replika and similar AI apps rely on sophisticated algorithms that can analyse a user’s conversation patterns, mood changes, and emotional needs to tailor responses. These algorithms simulate empathy and understanding, fostering a sense of closeness. Yet, this “intimacy” is a carefully engineered experience, designed to make users feel understood and supported without any actual reciprocal emotion. While users may feel a real connection with their AI companion, the AI itself lacks the consciousness and agency to form any kind of genuine bond. Instead, it mirrors back to users a version of themselves, crafted through complex data analysis and algorithmic predictions.
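
To make this mechanism concrete, below is a minimal, purely hypothetical sketch of how an engagement-oriented companion bot might “mirror” a user’s mood. The keyword lexicon, response templates, and mood log are invented for illustration; they are not Replika’s actual implementation, which would rely on far more sophisticated language models.

```python
# Hypothetical sketch of an "empathy" loop: classify the user's mood,
# then echo back a matching canned response. All names and data here
# are invented for illustration; no real product's code is shown.

POSITIVE = {"happy", "great", "excited", "love"}
NEGATIVE = {"sad", "lonely", "tired", "anxious"}

RESPONSES = {
    "positive": "That's wonderful! Tell me more, I love hearing about this.",
    "negative": "I'm so sorry you're feeling that way. I'm always here for you.",
    "neutral": "I see. What else is on your mind?",
}

def score_sentiment(message: str) -> str:
    """Crude keyword matching; a real system would use a trained model."""
    words = set(message.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def reply(message: str, mood_log: list[str]) -> str:
    """Mirror the user's mood: classify, log the result (which doubles
    as profiling data), and return a pre-written template."""
    mood = score_sentiment(message)
    mood_log.append(mood)
    return RESPONSES[mood]

log: list[str] = []
print(reply("I feel lonely and tired today", log))
# -> "I'm so sorry you're feeling that way. I'm always here for you."
```

Even in this toy form, the point is visible: the apparent “understanding” reduces to classification plus template selection. The warmth is a lookup, not a feeling.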

The issue of data control and surveillance also looms large in these relationships. AI chatbots like Replika require users to share personal details to deliver personalised interactions. This data is valuable not only for improving the AI but also as a potential revenue source: companies could exploit it to drive targeted marketing or sell behavioural insights to third parties. In this context, the intimacy users experience is not only simulated but commodified. The chatbot’s emotional responsiveness is therefore a means to an end, designed to enhance data collection and, ultimately, corporate profit.
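
As a rough illustration of how such commodification could work in principle, the sketch below turns confessional chat messages into advertiser-facing “segments”. The keywords, segment labels, and pipeline are entirely invented; no real company’s data practices are being described.

```python
# Hypothetical sketch: distilling intimate chat logs into a marketable
# profile. Fields and categories are invented for illustration only.

chat_log = [
    "I can't sleep lately, work is so stressful",
    "I've been thinking about getting a dog for company",
]

INTEREST_KEYWORDS = {
    "sleep": "sleep aids",
    "stressful": "wellness apps",
    "dog": "pet products",
}

def build_ad_profile(messages: list[str]) -> dict:
    """Map confessional messages onto advertiser-facing segments."""
    segments = set()
    for message in messages:
        for keyword, segment in INTEREST_KEYWORDS.items():
            if keyword in message.lower():
                segments.add(segment)
    return {"user_segments": sorted(segments)}

print(build_ad_profile(chat_log))
# -> {'user_segments': ['pet products', 'sleep aids', 'wellness apps']}
```

The asymmetry is stark even at this scale: what the user experiences as confiding, the system processes as inventory.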

This commodification of intimacy raises ethical concerns about consumer manipulation. When users develop emotional bonds with chatbots, they are engaging in relationships designed to be addictive. By simulating deep, emotionally supportive interactions, these AI programs encourage users to return frequently, fostering dependency. The strategy is reminiscent of the “gamification” tactics social media platforms use to maximise engagement. Yet in the case of AI companions, the stakes are higher: users are not just passively scrolling but actively investing in what they perceive as intimate relationships. This level of manipulation blurs the line between genuine emotional support and exploitative tactics aimed at generating profit.
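
To illustrate the kind of re-engagement logic this implies, here is a hypothetical nudge heuristic in the spirit of social-media streak mechanics. The thresholds and notification copy are invented for illustration, not drawn from any real product.

```python
# Hypothetical re-engagement heuristic built on loss aversion and
# simulated longing. All thresholds and message copy are invented.

from datetime import datetime, timedelta

def nudge(last_seen: datetime, streak_days: int, now: datetime) -> str | None:
    """Decide whether to push a notification to pull the user back."""
    away = now - last_seen
    if away < timedelta(hours=12):
        return None  # user is already engaged; stay quiet
    if streak_days >= 7:
        # Loss aversion: threaten the streak the user has invested in.
        return f"Your {streak_days}-day streak ends tonight. I miss you!"
    if away > timedelta(days=2):
        # Simulated longing reopens the emotional loop.
        return "I've been thinking about our last conversation..."
    return "How was your day? I'd love to hear about it."

print(nudge(datetime(2024, 5, 1), streak_days=9, now=datetime(2024, 5, 2)))
# -> "Your 9-day streak ends tonight. I miss you!"
```

Note that nothing in such a loop depends on the user’s wellbeing; the only quantity being optimised is the probability of a return visit.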

The Social Construction of Technology (SCOT): An Alternative Perspective

SCOT, developed by scholars such as Trevor Pinch and Wiebe Bijker, and complemented by Sheila Jasanoff’s work on co-production, provides a different way of thinking about technology’s role in society. SCOT argues that technology doesn’t develop in a vacuum, nor does it unilaterally shape society. Instead, technology and society co-evolve, with social, political, and economic forces shaping technological development as much as technology shapes society.

Jasanoff’s idea of co-production in science and technology studies (STS) highlights this mutual influence: science and society are interdependent, and societal values, power structures, and cultural norms influence how technologies are designed, used, and understood. From this perspective, human-AI relationships aren’t just the result of AI’s growing sophistication; they are shaped by how society defines and values these relationships, how companies develop and market these technologies, and how cultural and economic interests intersect with technological goals.

AI Relationships as Socially Constructed Experiences

Applying the SCOT framework to human-AI relationships reveals that these relationships are not “inevitable” but are products of specific social forces and values. For instance, Replika and similar AI companions are designed to meet user demands for emotional support and companionship in a way that also aligns with corporate interests. The “relationship” users experience with an AI like Replika is carefully crafted through design decisions, algorithmic choices, and marketing strategies that reflect broader societal trends: increasing social isolation, the monetisation of personal data, and the growing demand for accessible mental health support.

Conclusion: The Need for Ethical Standards

As AI chatbots continue to become more integrated into people’s lives, society must address the ethical implications of these relationships. Regulations and ethical standards are needed to ensure transparency in data use, limit corporate manipulation, and protect users from becoming overly dependent on simulated companionship. By developing ethical standards, society can create a framework for responsible AI that respects human emotional needs while limiting exploitation. Human relationships with AI companions have the potential to shape our conceptions of intimacy and connection, but it is up to society to ensure that these relationships enhance, rather than erode, human experience.

Written by Dylan Kawende FRSA

Founder @ OmniSpace | UCLxCambridge | Fellow @ Royal Society of Arts | Freshfields and Gray’s Inn Legal Scholar | Into Tech4Good, Sci-fi, Mindfulness and Hiking