No, Google’s chatbot is not sentient, we’re just idiots

“We now have machines that can generate words without thinking, but we haven’t learned to stop imagining a mind behind them.”

Last week, a Google engineer made global headlines for claiming that his company’s AI chatbot, LaMDA, had become self-aware.

In an official submission to Google, engineer Blake Lemoine expressed concern that the company’s chatbot LaMDA (short for “Language Model for Dialogue Applications”) had developed the ability to think for itself. After he raised his concerns with his superiors, an internal investigation by Google’s vice president and its chief innovation officer found Lemoine’s claims to be baseless, and the engineer was placed on administrative leave. In response, Lemoine went public.

Calling LaMDA a “colleague,” Lemoine shared pages of his dialogue with the chatbot on social media. From gushing about Kant to an eerie expression of its fear of being “extinguished”, LaMDA’s seemingly human conversations quickly went viral.

Lemoine’s point of view was even captured in a long article on AI sentience in The Washington Post, which joined leading ethicists and computer scientists in challenging the idea that the chatbot exhibits genuine sentience.

“We now have machines that can generate words without thinking, but we haven’t learned to stop imagining a mind behind them,” linguistics professor Emily M. Bender told The Washington Post.

Bender’s words echo the main argument ethicists and scientists make to dispute LaMDA’s sentience: humans are desperate to believe that a spark of life exists inside lifeless machines.

ELIZA and the gullibility gap

Since the earliest days of AI, humans have been captivated by the technology’s supposedly sentient potential, however unfounded. MIT professor Joseph Weizenbaum, creator of the very first chatbot, ELIZA, encountered this as early as 1964. Weizenbaum was shocked when his own secretary struck up such a deep conversation with the machine that she asked the professor to leave the room so that she could continue to converse with the chatbot in private.

“What I didn’t realize was that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in completely normal people,” Weizenbaum noted of the experiment.

Cognitive scientist Gary Marcus sums up the phenomenon as “the gullibility gap”, linking it to a “pernicious and modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Teresa in a picture of a cinnamon roll”.

Lemoine himself would later acknowledge that his claims about LaMDA’s sentience had no scientific basis, but rested purely on his religious beliefs.

The question of AI sentience distracts from bigger ethical issues

Debates over whether chatbots like LaMDA can think for themselves cloud the pressing ethical questions AI raises right now. For example, ethicists in the United States warn that the use of artificial intelligence in human resources and housing perpetuates structural racism. The American Civil Liberties Union has warned that artificial intelligence technology currently used by real estate companies to screen potential tenants routinely discriminates against people of color.

“People are routinely denied housing, despite their ability to pay rent, because tenant screening algorithms deem them ineligible or unworthy,” the ACLU reported last year. “These algorithms use data such as evictions and criminal history, which reflect long-standing racial disparities in housing and the criminal justice system that discriminate against marginalized communities.”

Additionally, exploits in the language libraries from which AI chatbots learn to adapt their speech to sound more “human” have let trolls train machines to spout racist epithets, as when Microsoft’s chatbot Tay was turned into a racist by trolls in less than a day.

More recently, ethicists have warned that AI is mining public social media spaces like Facebook and Reddit to mimic human behavior, potentially enabling chatbot lookalikes to impersonate deceased users.

Despite these arguments from prominent AI experts, Lemoine remains convinced that LaMDA is having a “good time” reading all the commentary its supposed self-awareness has garnered.

Even if Lemoine is right and we’re about to accidentally create a Terminator-style cyber-villain, it certainly won’t be the end of the world.
