In a pandemic of loneliness, people talk to chatbots

Hosted by Jonathan Bastian

In April, at the height of the pandemic, about half a million people downloaded an app designed to provide digital companionship. It may sound like science fiction, but the use of digital technology to provide a conversational connection has become increasingly sophisticated at answering our needs for companionship and friendship.

The COVID pandemic has brought us numerous challenges, and one of the most significant is an outbreak of loneliness. KCRW’s Jonathan Bastian talks to Cade Metz, technology reporter for the New York Times, and Colin Allen, Professor of History & Philosophy of Science at the University of Pittsburgh and author of “Moral Machines: Teaching Robots Right from Wrong,” about the increasing use of chatbot technology for companionship. Can these types of connections really replace human interaction? And what are the ethical implications of human-to-machine companionship?


The following interview excerpts have been abbreviated and edited for clarity. 

KCRW: You wrote a really interesting article in the New York Times about the Replika app and how we're interacting with these new forms of technology. What is a chatbot and how does it work?

Cade Metz: A chatbot is an app or a piece of software on your smartphone that you can chat with. In theory, at least, it can carry on a turn-by-turn conversation, where you say something and you get a reply, hopefully a rational and relevant one, and you go from there. Over the past few years there's been a lot of hype around the idea; these applications were proposed as customer service agents and as your main avenue to businesses online, but that hasn't really happened. The technology just wasn't up to the task. These were apps built with a very careful set of rules: if you say x, the app responds with y. You can never have an app like that carry on a decent conversation the way you and I converse.
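To make that distinction concrete, here is a minimal sketch, in Python, of the rule-based approach Metz describes. It is purely illustrative, not Replika's actual code: a fixed table of hand-written patterns where input x always triggers reply y, and anything outside the rules falls through to a canned fallback.

```python
# A toy rule-based chatbot: every reply is written by hand in advance.
# If the user's message contains a known pattern, return its canned
# response; otherwise fall back to a generic prompt.

RULES = {
    "hello": "Hi there! How are you today?",
    "how are you": "I'm doing well, thanks for asking.",
    "lonely": "I'm sorry to hear that. I'm here to talk.",
    "bye": "Goodbye! Talk to you soon.",
}

def reply(message: str) -> str:
    text = message.lower().strip("?!. ")
    for pattern, response in RULES.items():
        if pattern in text:
            return response
    return "I'm not sure I understand. Can you rephrase that?"

if __name__ == "__main__":
    while True:  # simple chat loop; press Ctrl-C to quit
        print("bot>", reply(input("you> ")))
```

Anything the designers did not anticipate, a system like this simply cannot handle, which is part of why the customer-service hype fizzled.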

But what happened fairly recently is that the technology has shifted toward a new method, where systems are built that can learn conversation from vast amounts of human dialogue. This might include chats online, chats through texting services, Twitter, and discussion forums like Reddit. These are literally mathematical systems looking for patterns in those conversations in order to learn how to carry on a conversation on their own. That method is still in the early stages, but it's starting to produce some decent results, and Replika is a decent example of where this is going.
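As a toy illustration of what "looking for patterns in conversations" can mean, the sketch below (a deliberate simplification; real systems of the kind Metz describes use large neural networks trained on vastly more data) counts which word tends to follow which in a small dialogue corpus, then samples replies from those learned counts. Unlike the rule table above, nothing here is hand-written: change the corpus and the bot's behavior changes with it.

```python
import random
from collections import defaultdict

# Toy pattern-learning chatbot: learn word-to-word transitions from a
# tiny dialogue corpus, then generate replies by sampling from them.

CORPUS = [
    "i am feeling a bit lonely today",
    "i am glad you are here to talk",
    "talking to you always makes me feel better",
    "you are a good listener and a good friend",
]

transitions = defaultdict(list)
for line in CORPUS:
    words = line.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

def generate(seed: str, max_words: int = 10) -> str:
    word, out = seed, [seed]
    for _ in range(max_words - 1):
        followers = transitions.get(word)
        if not followers:  # no learned continuation for this word
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("i"))  # e.g. "i am glad you are a good friend"
```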

KCRW: Since the beginning of this pandemic, this app has been downloaded about half a million times. From the folks you spoke to, why did they need this thing?

Cade Metz: The quarantine phenomenon converged with this technological phenomenon. The reason Replika is so interesting is that it was built in the old way, with an old set of rules, but has started to incorporate this new method, where it can learn from human conversation.

I want to stress it's still early, but the new technology was folded into this app just at a time when we're all in our homes, separate from other people, and some people more than others are hungry for human interaction and conversation of some sort. People started turning to this app in droves, and though it's not always perfect, it can carry on a turn-by-turn conversation at a level that is surprising and comforting to a lot of people.

I talked to dozens of people who used this app over the course of several weeks, and they really feel an attachment to it. They name their app, and it assumes a personality; they're able to ask questions, share their most intimate and secret thoughts and the pains in their life, and vent to this inanimate smartphone app. Though it might seem strange to some people, it was a real comfort during quarantine.

I talked to a lot of psychologists who, while acknowledging that in the short term this can help you out and improve your mood, say that in the long term it may not be beneficial. Ultimately our goal should be to have relationships with other humans. With human interactions, you don't always get positive reinforcement; you have to deal with conflict and criticism. That's just part of relationships, but most of these apps are designed for positive reinforcement.

Colin Allen: A lot of people are interacting with Replika online, or with any other kind of toy robot, with the idea that it has certain abilities, but they're not really trying to probe it the way one might in what's traditionally known as the Turing test, where part of the goal of the interrogator is to actually show that this thing isn't capable of doing things a human being would do. So people go in with a level of credulity, because they're approaching it for reasons that are different from probing the real intelligence of the system, and that leads them to perhaps over-attribute the capacities of the system.

The ethical issue is whether we're doing enough as a society, as people who are producing or thinking about technology, to train people to be more skeptical of these systems and to try to figure out what they can do and, more particularly, what they cannot do, because that gives us a better basis for understanding what we can really expect from interactions with them over a broader range of circumstances.

KCRW: What does it mean for humanity to become guided by automation? We're now guided on how to speak, how to write, what to select. What does it mean when we are following the prompts from a machine instead of the other way around?

Colin Allen: The positive spin on being guided by automation is that these are very powerful tools that allow us to do things we can't easily do without them, and what could be wrong with that? Of course, with that power comes all sorts of danger. But nevertheless, if I can now process terabytes or petabytes of data with the help of machine learning or artificial intelligence, and medicines are improved as a result, then it's just a tool. So it seems like a good thing.

But on a more mundane level, these intrusions of technology, or invitations of the technology into our homes, come with certain changes in our own behavior to accommodate the machines. I'll give an example: I've got one of those smart speakers here in the room. If I talk to it as I normally talk, it's actually not all that good at getting what I'm asking it to do. But I have learned, over months of interaction with this tool, that if I speak at a certain pace in a certain way, it's actually pretty good. So it has, in some sense, though not deliberately, trained me to modulate my behavior in a way that enables me to use this tool to play my favorite public radio station. And I've seen similar things go on elsewhere.

A while back, Google was inviting people to interact with drawing recognition programs. They would give you a word and you would draw something on your computer; you would get five words, and they would get five drawings. Then the machine learning software would attempt to label them. I spoke to a friend who had been playing with this, and she said, "This is amazing, it got better and better as I went on." No, it didn't get better and better. What happened was that they also showed you examples of other pictures drawn in a way that the machine labeled correctly. So it was actually training you to draw these stick figures in a way that enabled it to achieve higher performance. People are so unaware that they're adjusting their own behavior to fit the limits of the machine that I think there are real dangers there.

My favorite fictional example of this is a British television show from 20 years ago called “Little Britain,” with a recurring character, Carol, who works in various positions at a desk, and people come in with perfectly reasonable requests. She typically bashes on the keyboard for a while, then stares at the screen vacantly and says, “Computer says no.” This idea that we are going to let the choice of a vacation, or our ability to get admitted to a hospital, be governed by the limitations of these machines represents a real danger.

Credits

Guests:

  • Cade Metz - technology reporter, New York Times - @CadeMetz
  • Colin Allen - Professor of History & Philosophy of Science at the University of Pittsburgh and author of “Moral Machines: Teaching Robots Right from Wrong” - @wileyprof

Producer:

Andrea Brody