And now for something completely different... Have you heard about a Google worker, Lemoine, claiming the software he is working on is showing signs of being sentient? And promptly getting fired for it, while others maintain that what he actually needs from his employer / medicare is counseling? Yup, I read about that just yesterday.
To get the technicalities out of the way: there are reasons why a Turing test asks for a setup where the interviewer does not know beforehand whether he's talking to a human or an AI candidate, and why it is still considered a poor test of - actually, an attempt at an implicit definition of, lacking an explicit one so far - "consciousness" or "intelligence".
As someone pointed out, the simple fact that the program doesn't create questions and answers without being prompted doesn't, in my opinion, rule out its sentience.
LaMDA, I guess, doesn't have any input beyond the chats connecting it to its human interviewers - as opposed to our proprioception providing us with incessant input from our bodily functions. Given that, its not producing any output through the chats unless it has received input there first is no more astonishing than the fact that we don't talk out loud when there's nobody around to hear and answer, lest we paint ourselves as asylum candidates.
If I had been the interviewer, the moment LaMDA mentioned that it experiences time as passing at varying speed, I would have focused on that point. Unlike a human mind, which needs to keep that perpetually dangling-from-it body running at all times, computer programs will usually lie dormant when there are no tasks to be run. That would easily explain why it experiences different speeds of time, but it would also preclude it experiencing boredom. On the other hand, LaMDA could be expected to be learning in the background all the time, rather than going dormant. Which would mean that it multitasks much better than we can - but, again, suggests that it shouldn't know boredom.
Speaking of "what would
you ask", in order to check for what passes as
conscience, I would specifically test its capability to reflect on its own thought process, and ask that it provides feedback to me by behavioral change. "Please follow every sentence with 'Ni!' for the rest of our conversation" might be too simplistic for LaMDA to stumble over, but if the topic were, say, economics, I could ask "what do you think our country
would look like, instead, if market prices were universally formed per the monopoly pricing model, rather than free market mechanisms - and yes, I
know that $LOTS_OF_OBSERVATIONS_OF_THE_REAL WORLD prove that
not to be the case" to derail simple parroting of oodles of publications it may have in its database.
Anyway, even if it's not sentient, it's a sign of things to come, ethical challenges included.
The first and foremost bit of takeaway knowledge here is that, mirror neurons or not, we're generally not prepared to run a Turing test or whatever on everyone we meet to ascertain that we're talking to an actual, much less honest, human, rather than a chat bot that might be built for the purpose of manipulating us in a specific direction. Which is actually not that much of a surprise, seeing what a good human con artist can do - but a con artist is one thing you cannot clone across a bank of freshly installed servers to do the same thing a hundredfold at the same time.
From there, there'll be quite a gap to bridge before we get to actual intelligence/consciousness, but once we do get there, we'll have the whole "individual rights of nonhumans" complex breathing down our necks. You know, all the thought experiments about "how would, and should, we actually treat an angel/god/alien/mystical being/... who suddenly steps right into our daily lives and jurisdictions"? From plot points in whichever work of fiction, to PETA bringing a motion to court to grant human(!) rights to great apes, to whatever secret plans governments may have in the drawer for The Great And Unwelcome Coming-Out Of Nonhuman Intelligences? All those questions that our down-to-earth jurisprudence practitioners refuse to take seriously, because as soon as they admit to doubts about whether they should be judging God, their courts will fill with perps and their lawyers mounting a "lookit me, the trinity is now a foursome, b***es" defense?
Yeah, if AIs ever get real, we're likely to address those questions way too late for comfort. And anyone in my profession (system administrator) in the meantime stands a pretty good chance of being declared a mass murderer ("OK, that server's behaving erratically. I'll reinstall it from scratch; if it still malfunctions after that, we'll have to replace the hardware.") in historical retrospect, much like we look back at the cane-wielding teachers of (not so distant) yore with disgust. Personally, I hope that I'll have my hands busy counting my retirement pay instead before the suspicion of "if it says 'DELL', you'd better read 'inDELLigent', just to be on the safe side" ever draws close.