Why Journalists Should Take a Break from Talking to Robots
We need a moratorium on talking to chatbots until journalists stop turning banal tech into spectacle and fear-mongering
Recently, several tech journalists got scared of robots and decided to write about it. In one case, the story made the front page of the paper of record. The New York Times’ Kevin Roose published a conversation he had with Bing’s new AI search chatbot and came away worried about the tech. Unfortunately, Roose’s article does very little to educate readers about the technology and instead doles out fear-mongering about a ghost in the machine.
“When the Google engineer Blake Lemoine was fired last year after claiming that one of the company’s A.I. models, LaMDA, was sentient, I rolled my eyes at Mr. Lemoine’s credulity,” Roose explains. He then follows this up with, “Still, I’m not exaggerating when I say my two-hour conversation with Sydney [the nickname he gave the Bing bot] was the strangest experience I’ve ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward.”
Roose explains that his fear is not that an AI will start the robot wars, but that these generative bots can influence users into acting inappropriately.
Disastrously missing from the article is any explanation of why the bot acts this way. The explanation is fairly banal and easy, too: these bots are influenced… by us. The users. I’m sure Roose knows this, but the spectacle of unease makes a better story.
However, this is not to say that the robot isn’t problematic.
These robots are a combination and compression of thousands of inputs from users, testers, journalists, and people who aim to mis-train the robot. Did we forget about Tay? Tay was Microsoft’s earlier ill-conceived chatbot that “lived” just over 24 hours before it had to be shut down, after users deliberately trained it to spout offensive garbage.