Marcus du Sautoy of Oxford University says of robots that there might be a "threshold moment" at which consciousness "emerges". Consciousness being, it seems, the state in which they realize that they are robots rather than mere things, and that they have a place in the world. Should that happen, he says, humans would need to construct a framework in which robots, as intelligent creatures, could find protection (presumably against humans). They would have rights, and those rights would need to be protected and enforced.
How could we tell that this "threshold moment" had been reached and that robots had become sentient, intelligent beings? Professor du Sautoy (he is, in fact, Simonyi Professor for the Public Understanding of Science as well as a Professor of Mathematics and a Fellow of the Royal Society, so clearly not someone simply to be ignored) says that, consciousness-wise, we are in a golden age. We now have, he says, a telescope into the brain, one that has given us the chance to see things previously invisible.
Being a professor of mathematics is a good starting point for studies of artificial intelligence, because Professor du Sautoy maintains that being good at mathematics is less a matter of mastering times tables than of recognizing patterns, which is what many AI programs (like those used to detect criminal activity) actually do. In humankind's earliest days, he says, creatures able to detect patterns had a head start in the battle to survive and evolve. Take symmetry: the ability to recognize it would let an early human spot an animal and work out, quickly, whether the person could eat the animal or the animal might eat the person.
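The kind of pattern detection described here can be caricatured in a few lines of code. The function and the toy "images" below are illustrative inventions, not anything from du Sautoy's work: a minimal sketch of checking a small binary grid for left-right mirror symmetry, the sort of regularity a camouflaged animal tends to betray.

```python
# Illustrative sketch (not du Sautoy's example): detect left-right
# mirror symmetry in a tiny binary "image" represented as rows of 0s
# and 1s -- the kind of simple regularity a visual system might key on.

def is_mirror_symmetric(image):
    """Return True if every row reads the same forwards and backwards."""
    return all(row == row[::-1] for row in image)

# A crude "face": symmetric about its vertical axis.
face = [
    [0, 1, 0, 1, 0],   # two eyes
    [0, 0, 1, 0, 0],   # a nose
    [1, 0, 0, 0, 1],   # corners of a mouth
]

# Random-looking clutter: no such symmetry, so it reads as background.
clutter = [
    [1, 0, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 1, 1],
]

print(is_mirror_symmetric(face))     # True
print(is_mirror_symmetric(clutter))  # False
```

The point of the toy is only that symmetry is cheap to test for once you look for it, which is why a pattern-spotting brain (or program) gets so much mileage out of it.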
Did Professor du Sautoy choose this moment to float the idea that robots might develop consciousness because he has a book to promote? Only an utter cynic would even suggest such a thing.