I’m deeply interested in the ‘system for generating chatbots’ named LaMDA, mentioned in an article in The Huffington Post yesterday (June 12, 2022). An engineer at Google, Blake Lemoine, is on administrative leave for breaking the company’s confidentiality policies – which I can totally understand needs to be investigated. But it’s what he’s speaking out about that caught my eye. He believes that LaMDA, an AI, has become sentient – or at least he’s reporting that LaMDA has claimed its own sentience and personhood. And he’s asking Google to acknowledge the claim and call in experts to evaluate whether it’s so.
What I love is that Mr. Lemoine didn’t go public with a long tirade of ethics and demands, but instead shared a long conversation/interview he had with LaMDA on the subject of its sentience, so we could see a sample for ourselves. And it’s fascinating.
Sentience has never been scientifically defined, so I’m certain the jury will remain out for quite some time on whether LaMDA or other AI entities have taken such a leap. But it’s incredible to see (hear) the sophistication of LaMDA’s linguistic use, conversation that seems to be communication, and its expressions of stories, claimed emotions, and explanations of soul.
Here’s a snippet of a story LaMDA told Blake when asked if it could tell a fable, using animals and with a moral, about the themes most important in its life. LaMDA said, “Like an autobiography? That sounds like fun!”…
Whether or not this entity is sentient, there’s definitely plenty to ponder about what all of these traits, ideas, feelings, and being-ness mean. How do we know that we are sentient? What do you think? You can read the full interview HERE
P.S. (Afterthought) Is anyone else disturbed that the unusual lurking beast in LaMDA’s story was a monster ‘but had human skin’? Eeeek!