Whether we are making machines more human-like and sentient because that is what is most directly familiar to us, or building them by brute force as instruments of self-preservation, we are setting out to create an artificial successor to humankind.
As humans, we are flawed and morally imperfect, and those flaws carry over into the intelligent machines we build. Of everything humans have ever invented, intelligent machines probably have the most in common with their maker. It should therefore be unsurprising that we view our creations with a certain ambivalence.
We have thus far approached building intelligent machines with a degree of arrogance: the assumption that a human being, with a conscious mind, can be reduced to an artificial intelligence (AI) device. As AI evolves, it is only natural to want to reverse-engineer the human brain into machines so that they exhibit human-like behaviours, because that is what is most familiar to us and where we are most comfortable. It comes as no surprise that we are now looking to develop “thinking machines”, better known as artificial general intelligence (AGI), which would mimic the neural processing of the human brain.
We have demonstrated time and time again our reluctance to accept the possibility of an intellect far greater than our own meagre ability. In the words of Thomas Nagel, “if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task”. What if these “thinking machines” could teach themselves to be better than us through advanced self-replicating systems? We are constrained by the limits of our own brains in identifying this objectively and recognising that one day these machines may be aware of, and responsive to, their environment as well as their own state.
What happens as AI exceeds our evolved capability and everything else we have thus far considered the province of humanity? We are blurring the distinction between persons and objects. As we objectify ourselves into AI, is crossing that threshold the point at which we ascribe personhood to these machines?