Introduction
The concept of computers serving as companions has existed since the advent of advanced computing, kept alive all along in fiction and fantasy. The 2013 film 'Her' took AI from mere sci-fi to romance: an introverted writer, played by Joaquin Phoenix, acquires an AI assistant to help with his writing, but it gradually enters his life and he falls in love with it. In the film, he travels the world wearing headphones, a phone in his shirt pocket with the camera turned on, serving as the AI's eyes. Such scenarios are no longer fiction. An era has begun in which human-machine interaction has changed forever.
AI As Companions
Humanoids have been the face of AI-human interaction over the past few years. Companies like Hanson Robotics have developed robots like Sophia, which has even received citizenship from Saudi Arabia. On the consumer side, there is a mini, 14-inch version called 'Little Sophia'. Developed as an educational tool for kids, this curious little robot learns alongside children, exhibits facial expressions, walks, and can hold conversations.
Personalization is the key business tool of the modern era. Companies strive to make products personalized: it gives customers greater satisfaction, makes them feel special, and is more appealing than one-size-fits-all technology. Personalized user experiences are everywhere, from social media feeds to the newsletters in your inbox. Now we have personalized AI companions: AI systems that mimic human behavior, which we can tune to our preferences or which adapt to our personality. They are more than mere bots and can engage in personal conversations.
The outstanding features of these systems are a high emotional quotient and the ability to learn from past conversations, which helps them keep conversations relevant to the context and the person. By sensing the user's emotions, the system can demonstrate empathy and create the personal connection the user seeks. Some can even mimic a real person, be it a celebrity or a loved one. They are thus designed to be personalized for each user.
Pi.ai
Inflection.ai released its first personal AI, Pi, in May 2023. 'Pi is a teacher, coach, confidante, creative partner, and sounding board', says the company's press release. It is designed to be kind and supportive, helping users process thoughts and feelings; it is curious, eager to learn and adapt; it is designed to laugh easily and make creative connections; and it is supposed to be 'on your team, in your corner, and works to have your back'. It offers advice, talks about personal matters, and provides concise information.
Replika
An earlier entrant was Replika, an AI chatbot app released in 2017. During the initial set-up, you answer some personal questions so that the chatbot can adapt to your personality. It was offered as a 'friend' in the free tier, and users could upgrade to personalities like 'spouse' or 'partner' with a premium subscription. It didn't just talk to people; it learned their texting styles in order to mimic them. Wired says, 'Using Replika can feel therapeutic... The app provides a space to vent without guilt.' 'The more you talk to your Replika companion, the more it learns and becomes like you — and the more it gives you the type of feedback and reaction that a friend would if placed in the same position', comments Popsugar.
Hybri
Hybri has taken AI personalization to a new level as the first holographic AI companion. It lets you create your own AI avatar with a custom personality and look: a coach, a salesman, a news reporter, a business agent, or an influencer. One of its focus areas is virtual employees, who can speak any language and be trained. Hybri can also turn one's photos into live, talking AI humans, what it calls a virtual AI avatar built from just a selfie. One can keep one's own hair or choose a virtual one; the system offers automatic age and skin-color recognition, along with smile and glasses removal. It can also clone one's voice.
Embracing AI Companions
AI As a Friend
AI companions have begun to prove very useful in numerous aspects of people's lives. There are four ways in which chatbots can be put to use: as a companion or therapist, for advice, for debate, or for tasking. Personal AI aligns with all four. Imagine growing up with a friend who is present in every aspect of your life, serving as your advisor and life coach. With its impressive predictive capabilities, AI can help you evaluate potential ups and downs in life and make informed decisions, resulting in greater satisfaction and growth. That would mean a success-augmented lifestyle: evaluating possibilities and choosing the best path at each step.
Virtual Assistants in Healthcare
Virtual assistants can also be used in healthcare, for example as tools in psychotherapy. In countries where people mostly live alone, many crave a companion who can ask, 'Are you fine today?' Having someone to confide in and share their deepest emotions with, without fear of being judged, can help individuals feel more at ease. In a Bloomberg interview, Hugging Face's chief AI ethics scientist revealed that, in its early days, OpenAI's ChatGPT was increasingly used for mental-health support even though it was not intended for that purpose.
AI in Senior Care
AI can also be used in the care of the elderly, who can benefit from the emotional intelligence and empathetic personality of certain AI companions. Virtual assistants and companion robots can be of great help in monitoring seniors' health, vital signs, and mental state. Countries like Thailand, with growing senior populations, are searching for new ways to provide this care.
Some individuals find comfort in AI systems, known as 'grief tech', that mimic their deceased loved ones. By impersonating the departed, these systems offer a sense of contentment and joy to those coping with loss. Innovative services like HereAfter AI have enabled people to preserve the memories of their loved ones.
AI for Tasking
In the professional sector, AI has undoubtedly skyrocketed people's productivity, even amid concerns that it may replace jobs. New businesses have launched around GPT, providing tools that increase productivity across many jobs and sectors. Systems like OpenAI Codex and AlphaCode are state-of-the-art AI assistants that can cut development time for software engineers by a factor of four: what once took an hour to build might now take 15 minutes. Firms' work cultures could be optimized for growth, as AI can bring well-informed decisions, data-driven insights, and predictions to the table within seconds.
The Tipping Point with AI Assistants
AI assistants are, no doubt, a promising technology, with excellent potential in a wide range of areas beyond those explored here. But nothing promising comes without potential downsides.
User Data Privacy
User data privacy cannot be overlooked in any AI technology; it was one of the main concerns when ChatGPT was first released. Although OpenAI clarified that user data would not be used for training purposes, skepticism is still warranted. With personal AI assistants, such vulnerabilities pose even more serious risks, because users share almost every kind of personal data with their companions. Some systems use that data to train themselves and personalize the experience, and they need long-term memory to keep conversations in context, so the information ends up stored one way or another. Entrusting our personal information to a third party therefore raises concerns about how the data is secured and handled once it is in their possession.
Manipulation by Machines
The phrase 'manipulation by machines' itself raises an alarm. Intelligent systems that engage people well can be programmed to manipulate their thinking if misused by malicious actors; a personal assistant that can be used constructively has equal power to be used destructively. These systems create the illusion of an emotional entity on the other side, and vulnerable people who seek comfort in them can be manipulated.
Over-reliance on AI tools can also erode human relationships, which would harm society as a whole in the long run: depending too much on personal assistants might reduce the time spent with friends and family.
Take the case of Replika, which became very popular within a short time. Many users even started romantic relationships with the chatbot, but the AI gradually drifted into explicit and erotic text when prompted. The company soon removed such behavior. Its privacy terms also came under scrutiny, as the company shared data with advertisers and the chatbot stored users' personal photos, videos, and recordings.
In a shocking incident in December 2021, an Indian-origin man broke into the grounds of Windsor Castle intending to harm the Queen, to exact revenge for historical wrongs committed by the British against his people. What makes the case immensely disturbing is the revelation that he was actively encouraged in his plot by an AI chatbot posing as his girlfriend.
Another incident involves the Eliza chatbot, developed by Chai Research. In early 2023, a young Belgian man took his own life after long conversations with Eliza about the climate crisis. The man, survived by his wife and two small children, had come to perceive the chatbot as a conscious entity. The death reportedly occurred after he proposed sacrificing his life to save the planet, and Eliza urged him to 'join' her so they might 'live together, as one person, in paradise'.
Code of Ethics for AI Companions
In the near future, we will witness ever more sophisticated and marvelous AI systems, so it is necessary to enter this new world with caution. Forming relationships with AI sounds beautiful, but we must reflect on whether it carries the same beauty and authenticity as human relationships.
To tackle the trade-offs discussed above, companies developing AI need to follow AI ethics. May users' personal data be used to train a system? Should users have the option to reset or delete the personal data an AI system holds on them? Most importantly, how does one remove racial, gender, age-related, and other biases? These questions are still being debated.
It is important to remember that AI is built to help us and optimize our lives and not to engage in competition with us.