“If robots can feel pain, should they be granted certain rights? If robots develop emotions, as some experts think they will, should they be allowed to marry humans? Should they be allowed to own property?” Dylan Evans asked BBC News Online. As technology advances exponentially, experts are becoming more and more focused on the looming controversies of robot rights and robot ethics.
These issues, once only found in science fiction, are becoming more prevalent as time goes by. The U.S. military is already planning to have one-fifth of its armed forces be robotic by 2020.
There have also been staggering advances in the field of emotional robotics. Many machines have been developed for the sole purpose of winning the hearts of humans by imitating human emotions. These robots have been fitted with artificial skin that helps them give the impression of having feelings.
Robot rights issues have also turned into responsibility issues. Machines are already making decisions in the financial world. There is now a question of who is responsible when a robot makes a bad decision.
The South Korean government announced on March 7 that it was developing an ethical code intended to prevent humans and robots from abusing each other.
EURON, the European Robotics Research Network, is already lobbying for robot rights; however, the organization still has concerns.
“Security, safety and sex are the big concerns,” said Henrik Christensen, chairman of EURON, to The Times (U.K.).
“Robotics is rapidly becoming one of the leading fields of science and technology: we can forecast that in the twenty-first century humanity will coexist with the first alien intelligence that we have ever come into contact with – robots,” reads the introduction to the EURON Roboethics Roadmap. “Robotics could also be placed under scrutiny from an ethical standpoint by the public and public institutions.”
The ethical issues covered in the Roboethics Roadmap include human life versus artificial life, privacy, and human enhancement. The document also includes the dilemma of equality and diversity, which are two of Guilford College’s core values.
As technology advances and the moral circle is potentially widened, the Guilford community, as well as other colleges, may face their own robo-ethical dilemma. If this happens, the college will have to decide what its parameters will be regarding who is included in the faculty and student body.
Randy Doss, vice president for enrollment and campus life, is torn about how he would view artificial entities potentially working at or attending Guilford.
“Not that this is good, but what a great labor force. They never get sick. They never ask for a raise. They never unionize,” said Doss. “In one sense it’s very efficient, but in the other sense it’s certainly troubling and frightening. Obviously my daughter would have to live in that world, and I’m not sure how I’d feel about that. I’m not sure I’d have a choice.”
Although Doss is skeptical, he would not completely rule out admitting an artificial entity to the college.
“If this were to happen there would be a 10-year lead in. You have to ask, ‘What is High Point going to do? What is Elon going to do? What is UNCG going to do?’ You always benchmark other institutions. If others were doing it, we would do it.”
Information about robot ethics and technology can be found at EURON.org.