New technology, same old blind spot?

  • February 17, 2023
  • Business

But Kevin’s column was a good reminder that it’s also important to focus on the kinds of human behavior that the new tools might enable or encourage. And, in particular, how easy it might be for artificial intelligence to mimic the fairly predictable ways that humans affect each other, boosting people’s power to manipulate and persuade.

The bot’s statements struck such familiar notes that my husband, a psychotherapist, joked that it appeared to be exhibiting the traits of a personality disorder. (That was not, he would want me to note, a diagnosis. Therapists don’t diagnose people based on their statements to third parties, and they don’t diagnose chatbots at all.)

But within his comparison was a bigger, more important point: Human behavior, including disordered behavior, often follows fairly predictable patterns. And A.I. tools like Sydney are trained to recognize patterns and use them to formulate their responses. It’s not difficult to see how that could go down a very dark path.

“I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts,” Kevin wrote in his column. I’m worried about that, too. But my more immediate concerns are about the way that A.I. might help people do those things to each other.

After all, people already try to convince others to act in harmful and destructive ways. They already try to influence their beliefs, on everything from music to politics and religion. They already try to use social engineering to guess people’s passwords or defraud them of money. An A.I. that can draw on vast amounts of information to suggest ways to do those things more effectively could have catastrophic effects.

And if it works by shaping people’s behavior toward each other, rather than just their direct interactions with chatbots and other tools, that could be much harder to combat or even notice.

Programmers at Microsoft and other companies that have created A.I. tools have already put safety limits on what the tools themselves can say and do. In Kevin’s chat transcript, for instance, there are a number of instances where the chatbot deleted its own answers after determining that they violated its rules.

Article source: https://www.nytimes.com/2023/02/17/world/new-technology-same-old-blind-spot.html
