Canada is a leader in AI research – but that’s not enough

May 6, 2019

Discussions and debates about artificial intelligence abound today, and they have only deepened the extreme divide between developers and tech companies applauding advances, and naysayers warning about humanity’s potential destruction by superintelligent machines.

There is, of course, a large swath of middle ground – one that already affects us practically, day to day. The two crashes of Boeing 737 Max 8 airplanes, and the ensuing grounding of the fleet because of a failure of complex automated systems, have raised questions among passengers about the type of aircraft on which they fly. Accidents involving autonomous vehicles have led to parental concern over letting children ride to school in a driverless car or bus. Data-driven medical diagnosis and drug prescribing are on the horizon, too – but can you trust health-care AI as much as your family doctor?

These are vital questions about how AI’s many applications are being sold to us. For AI to succeed, consumers need to buy into the legitimacy of complex systems contained in opaque black boxes that companies guard zealously, citing proprietary reasons. And we don’t know enough about what will make consumers come around.

New technologies, such as home appliances and smartphones, have largely and successfully been sold to us on the value proposition that they’d save us time and effort. To a large extent, society has embraced this logic. But AI differs significantly from earlier technologies because in many cases, humans are not in control, as machines are left to do the “thinking”. Algorithms are influencing decisions in every aspect of our lives: shopping, banking, hiring, ways of working, dating, policing, education, health care, transportation and more. These developments are being presented as extending far beyond economizing on time or labour: It’s about economizing on thinking itself, by reducing human input in decision-making.

The industry must better understand the public appetite and expectations around AI, and define the boundaries of its social licence accordingly. Companies and governments cannot grant that licence to themselves; the evolution, diffusion and adoption of a technology or application must rest on sufficient legitimacy, accountability, acceptance and trust, along with the informed consent of those most affected. The public backlashes against energy companies such as Shell in Africa and BP in the Gulf of Mexico, and against GMO crop producers in Europe, are sobering examples of what happens when social licence is forced upon a community, and then lost.

Research on social licence has predominantly been done in the mining, forestry, energy and other natural-resource industries. It’s now time to grapple with these issues in the wide-reaching realm of AI. Some studies in the social sciences and humanities are already exploring privacy issues around the data gathered by AI applications. Others are looking at ethical and regulatory frameworks. Still others have flagged instances where developers’ inherent biases are being programmed into the machines: voice-recognition systems that privilege native speakers of a language or fail to properly decode female speech, or photo-tagging software that miscategorizes some humans as animals.

Other reports have highlighted how automated placement of advertising and other information on websites can lead users into fake-news traps. Similarly, proposed links to additional content on video-streaming sites can pull unsuspecting viewers into the abyss of extremist groups. In both cases, research shows that these mechanisms relativize the truth, exaggerate threats, falsely identify perceived enemies of the people and effectively put individuals and democracy at risk.

AI technological advances are moving at a brisk pace, and ethical frameworks such as the Montreal Declaration are being developed to guide the responsible development of AI. There is now an urgent need for sound research on how humans react to and adopt machine-driven environments. What’s required to establish public trust in, and legitimacy for, AI applications and solutions? What’s an acceptable level of risk? What criteria influence which decisions we’re willing to delegate to machines? What level of human oversight are we comfortable with, and in which situations? How much of a black box’s workings do we need to understand before we trust it?

Canadian deep-learning pioneers Yoshua Bengio and Geoffrey Hinton were among those recently recognized with the prestigious Turing Award, reflecting Canada’s broader leadership in AI scientific research. Now it’s time for Canada to step up its interdisciplinary research on the social acceptability of AI, and the dimensions of the social licence needed to inform responsible and ethical design. After all, AI – or any effective tech, really – is as much about its impact on society, and the relationship between technology and humanity, as it is about the science itself.

Source: theglobeandmail.com

Author: Ted Hewitt
