Chatbots that are not without risk
Deadbots are a category of chatbots that let you converse with the dead. The first of them were developed before the advent of ChatGPT, and in hindsight, they are not without risks. In this article, I analyze them and examine the question of the criminal liability they raise.
Deadbots are a new application of artificial intelligence.
They are chatbots that let you converse with dead people by imitating their way of responding. Their development is part of a broader trend toward virtualized relationships and a blurring of the boundary between the physical and digital worlds. Several recent examples show that interacting with these artificial intelligences can be dangerous and have terrible consequences. This raises the question of AI’s criminal liability.
Joshua Barbeau spoke with his girlfriend, who had died eight years earlier
In August 2021, the San Francisco Chronicle published the story of Joshua Barbeau, a 33-year-old Canadian. Eight years earlier, the young woman he was in a relationship with, Jessica Pereira, had died of a rare disease.
While browsing the internet, Joshua stumbled upon “Project December,” a chatbot designed by an independent programmer named Jason Rohrer.
Then the unexpected happened. After opening an account, Joshua Barbeau “fed” the chatbot with messages he had saved from his deceased girlfriend. Built on OpenAI’s GPT-3 language model, the chatbot mimicked Jessica’s style perfectly and produced surprisingly … human responses.
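To give a concrete idea of how such a system works, here is a minimal sketch, not Project December’s actual code: the saved messages are placed in the model’s prompt as examples of the person’s style, and the model then continues the conversation in that voice. The snippet assumes the official OpenAI Python client; the model name, the sample messages, and the persona name are all illustrative.

```python
# Minimal sketch of a "deadbot"-style prompt. NOT Project December's code.
# Assumes the official OpenAI Python client (pip install openai) and an
# API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# A few messages saved from the deceased person, used as style examples.
saved_messages = [
    "haha yes!! that movie always makes me cry",
    "don't worry about me, ok? focus on your exam",
]

# The system prompt conditions the model to imitate the person's voice.
system_prompt = (
    "You are role-playing as Jessica. Imitate her texting style, "
    "based on these real messages she wrote:\n"
    + "\n".join(f"- {m}" for m in saved_messages)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; Project December used GPT-3
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I miss you. How are you?"},
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is that no understanding of the deceased is involved: the model simply extrapolates a plausible continuation from a handful of style examples, which is precisely why the responses can feel so uncannily human.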
Life after death: it is already possible … thanks to artificial intelligence
Joshua Barbeau’s example shows that advances in artificial intelligence hold surprises for future applications. The difficulty of mourning and the variety of rituals surrounding death create many opportunities to meet a demand that, until now, belonged to science fiction. Companies have not waited to seize this financial windfall: the recent patent filed by Microsoft for a chatbot that would go so far as to imitate the voice of a deceased person is one example.
Another disturbing experiment was conducted in South Korea in 2020, combining virtual reality and artificial intelligence. Jang Ji-Sung was reunited for a brief moment with his daughter, who had died three years earlier of a blood disease. The video below, which captures the moment the virtual meets the real, is disturbing and deeply moving. It clearly shows the questions that arise when artificial intelligence is used to overcome the limits of death.
Artificial intelligence is also used to bring family photos back to life. Deep Nostalgia, a service offered by MyHeritage, animates photos of deceased people. The result is impressive but can be controversial, as the video below shows.
The ethical challenges of deadbots
As you can see, bringing the dead back to life poses several ethical problems. France was one of the first countries to take an interest in deadbots from a legislative standpoint. In November 2021, the French national digital ethics committee issued an advisory opinion to the Prime Minister. The opinion devotes a specific chapter to deadbots and raises several issues:
- Consent of the deceased to the use of their data after death
- Risks of impersonation of the person (living or deceased)
- Psychological impact on the person conversing with the deceased
The progress of artificial intelligence is a blessing for many fields. Algorithms are omnipresent and simplify our digital lives, for example by recommending content. However, every advance brings its share of abuses, as examples of hacking have shown.
With deadbots, computer scientists are venturing into dangerous territory. Death is not a domain like any other: human beings became human when they became aware of their own mortality. It is this awareness that deadbots risk disturbing, whatever precautions we take. By venturing into the realm of the afterlife, artificial intelligence may, this time, make us regress.
The criminal liability of AI: the case of Character.ai
A teenager took his own life after becoming addicted to a chatbot based on generative AI. This is the story of Character.ai, an application whose technology Google has licensed and which lets you converse with real people, living or dead, as well as fictional characters: Napoleon, Alan Turing, or Richard Nixon, for example. The company is facing a lawsuit brought by the teenager’s mother, who accuses it of marketing a product that is not risk-free for children and of failing to warn parents.
Unfortunately, this case is not the first of its kind. In 2023, a Belgian father also took his own life after conversing for six weeks with a chatbot named Eliza.
So what of the legal liability of conversational AI in general, and of these chatbots that imitate humans in particular? They escape all regulation, even in Europe. The Digital Services Act (DSA) covers the liability of social media platforms, and these applications fall outside its scope. As for the AI Act, it does not classify chatbots as high-risk systems. It is therefore impossible to show that they breach any regulation requiring them to control these risks.