“We now have machines that can generate words without thinking, but we haven’t learned to stop imagining a mind behind them.”
Last week, a Google engineer made global headlines for claiming that his company’s AI chatbot, LaMDA, had become self-aware.
In an official submission to Google, engineer Blake Lemoine expressed concern that the company’s chatbot LaMDA (short for “Language Model for Dialogue Applications”) had developed the ability to think for itself. After Lemoine raised his concerns with his superiors, an internal review by Google’s chief innovation officer and a company vice president found his claims to be baseless, and the engineer was placed on administrative leave. In response, Lemoine went public.
Calling LaMDA a “colleague,” Lemoine shared pages of his dialogue with the chatbot on social media. From gushing about Kant to an eerie expression of fear of being “extinguished,” LaMDA’s seemingly human conversations quickly went viral.
Lemoine’s point of view was even captured in a lengthy Washington Post article on artificial intelligence, which ultimately joined leading ethicists and computer scientists in challenging the claim that the chatbot exhibits genuine sentience.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
“We now have machines that can generate words without thinking, but we haven’t learned to stop imagining a mind behind them,” linguistics professor Emily M. Bender told The Washington Post.
Bender’s words echo the central argument ethicists and scientists make in disputing LaMDA’s supposed sentience: humans are desperate to believe that a spark of life exists in lifeless machines.
ELIZA and the gullibility gap
Since the very beginning of AI technology, human beings have been captivated by its sentient potential, however unfounded. MIT professor Joseph Weizenbaum, creator of the very first chatbot, ELIZA, encountered this as early as 1964. Weizenbaum was shocked when his own secretary struck up such a deep conversation with the machine that she asked the professor to leave the room so that she could continue to converse with the chatbot privately.
“What I didn’t realize was that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in completely normal people,” Weizenbaum noted of the experiment.
Cognitive scientist Gary Marcus sums up the phenomenon as “the gullibility gap,” linking it to a “pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Teresa in an image of a cinnamon bun.”
Lemoine himself would later acknowledge that his claims about LaMDA’s sentience had no scientific basis, resting instead purely on his religious beliefs.
People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn’t let us build one. My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.
— Blake Lemoine (@cajundiscordian) June 14, 2022
The debate over AI sentience distracts from bigger ethical questions
Debates over whether chatbots like LaMDA can think for themselves overshadow the pressing ethical questions AI raises right now. For example, ethicists in the United States warn that the use of artificial intelligence in hiring and housing perpetuates structural racism. The American Civil Liberties Union has warned that AI tools currently used by real estate companies to screen potential tenants routinely discriminate against people of color.
“People are routinely denied housing, despite their ability to pay rent, because tenant screening algorithms deem them ineligible or unworthy,” the ACLU reported last year. “These algorithms use data such as eviction and criminal history, which reflect long-standing racial disparities in housing and the criminal justice system that discriminate against marginalized communities.”
Additionally, exploits in the language corpora from which AI chatbots learn to sound more “human” have allowed trolls to train machines to spout racist epithets, as when Microsoft’s chatbot Tay was turned into a racist by trolls in less than a day.
More recently, ethicists have warned that AI trained on public social media spaces like Facebook and Reddit to mimic human behavior could lead to chatbot lookalikes impersonating deceased users.
Despite these arguments from prominent AI experts, Lemoine remains convinced that LaMDA is having a “good time” reading all the commentary its supposed self-awareness has garnered.
Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it.
— Blake Lemoine (@cajundiscordian) June 11, 2022
And even if Lemoine is right and we are about to accidentally create a Terminator-style cyber-villain, it certainly wouldn’t be the end of the world.
And if the AI becomes sentient who cares. What are you? scared? You can just do this pic.twitter.com/B2uezjeeMm
— Alicia (@nerdjpg) June 12, 2022