China is playing catch-up in developing ethical guidelines for artificial intelligence, with the establishment of an ethics committee.
Chen Xiaoping – inventor of Jia Jia, the realistic humanoid “Robot Goddess”, and KeJia, an intelligent home service robot – is leading the committee, which held its first conference last year and is due to meet again in May.
Chen, professor and director of the Robotics Laboratory at the University of Science and Technology of China, said AI in China had developed to a point where ethical guidelines were now necessary to address potential risks in large-scale applications.
“If the technology was far off being applied there would be no need to talk about ethics research, but there is value in this research into technologies that might be applied on a large scale in the next 10 or 20 years,” he said.
Chen was appointed to establish the ethics committee by the Chinese Association for Artificial Intelligence, the country’s only state-level AI body.
The complexity of the subject means the committee’s discussions include experts from AI research and industry, as well as sociology and the law.
“Furthermore, we need to discuss what risks these technologies might bring, as well as what preventive measures we can take,” Chen said.
The committee was looking into areas such as data privacy, AI in medicine, self-driving vehicles, and – of particular urgency, Chen said – AI in senior care.
Privacy is another area of high public concern in China, where AI and facial recognition technology are already deployed at subways, pedestrian crossings and some supermarkets. The technology has even helped police catch criminals on the run at concerts.
AI ethics have long been discussed in other countries. In Europe, for example, the European Union’s High-Level Expert Group on AI released the first draft of its ethics guidelines last December.
The EU draft discusses the concept of “human-centric AI”, which “strives to ensure that human values are always the primary consideration”, as well as “trustworthy AI”, meaning that “its development, deployment and use should comply with fundamental rights and applicable regulation, as well as respecting core principles and values, and it should be technically robust and reliable”.
The draft is open for comment until Jan 18 and the expert group will present its final guidelines to the European Commission in March.
Chen said that in China there was not yet much emphasis on AI ethics, but he was hopeful that, as more discussions took place, he could raise awareness in academia and industry.
He pointed to the surge in interest in ethics for biomedical engineering at the end of last year, prompted by news of the world’s first gene-edited twins.
Since Shenzhen-based scientist He Jiankui stunned the world with his announcement of the controversial breakthrough, Shenzhen authorities have tightened the ethical review process for biomedical research involving humans with their own set of local regulations.
“But AI is different from gene-editing in the way that the risks of AI are mostly in the applications, not in the technology itself,” Chen said.
“We can take preventive measures to avoid going in a certain direction or take measures to control the risks.” – South China Morning Post
Source: thestar.com
Author: Phoebe Zhang