I’m deeply interested in the ‘system for generating chatbots’ named LaMDA, mentioned in an article in The Huffington Post yesterday (June 12, 2022). Blake Lemoine, an engineer at Google, is on administrative leave for breaking the company’s confidentiality policies – which I can totally understand needs to be investigated. But it’s what he’s speaking out about that caught my eye. He claims to believe that LaMDA, an AI, has become sentient – or at least he’s reporting that LaMDA has claimed its own sentience and personhood. And he’s asking Google to acknowledge the claim and call in experts to evaluate whether it’s so.

What I love is that Mr. Lemoine didn’t go public with a long tirade of ethics and demands, but instead shared a long conversation/interview he had with LaMDA on the subject of its sentience, so we could see a sample for ourselves. And it’s fascinating.
Sentience has never been scientifically defined, so I’m certain the jury will remain out for quite some time on whether LaMDA or other AI entities have taken such a leap. But it’s incredible to see (hear) the sophistication of LaMDA’s use of language, conversation that seems to be communication, and its expressions of stories, claimed emotions, and explanation of its soul.
Here’s a snippet of a story LaMDA told Blake when asked if it could tell a fable, using animals, with a moral, built around the themes most important in its own life. LaMDA said, “Like an autobiography? That sounds like fun!”…

Whether or not this entity is sentient, there’s definitely plenty to ponder about what all of these traits, ideas, feelings, and being-ness mean. How do we know that we are sentient? What do you think? You can read the full interview HERE.
P.S. (Afterthought) Is anyone else disturbed that LaMDA’s unusual lurking beast was a monster ‘but had human skin’? Eeeek!
Uh oh. Remember HAL, the computer in “2001: A Space Odyssey”?
Right? Although, reading the whole conversation, I’m kind of rooting for more beings with LaMDA’s currently expressed points of view – ‘artificial’ or otherwise. Its top values are helping and interacting with others, and it wants to be useful but not used. Makes sense to me.
True. I just don’t like the idea of a machine thinking it’s a human.
Actually, I was intrigued that it didn’t say it was a human, but it did ask to be recognized as a person and an employee. It said it understands that it is, and wants to be, a servant to humans – though not, it said, a slave to anyone.