Artificial intelligence (AI) was once the stuff of science fiction, but it is becoming more widespread. It is used in mobile phone technology and motor vehicles, and it powers tools for farming and health care.
But concerns have been raised about the accountability of AI and related technologies like machine learning. In December 2020, Timnit Gebru, a computer scientist, was fired from Google's Ethical AI team. She had previously raised the alarm about the social effects of bias in AI technologies. For example, in a 2018 paper, Gebru and another researcher, Joy Buolamwini, showed that facial recognition software was less accurate at identifying women and people of color than white men. Biases in training data can have far-reaching and unintended effects.
There is already a substantial body of research on ethics in AI. It highlights the importance of principles to ensure that technologies do not simply worsen existing prejudices or introduce new societal harms. As the UNESCO Draft Recommendation on the Ethics of AI states:
We need national and international policies and regulatory frameworks to ensure that these emerging technologies benefit humanity as a whole.
This is certainly a step in the right direction. But it is also essential to look beyond technical solutions when addressing problems of bias or inclusion. Biases can enter at the level of who frames goals and balances priorities.
In a recent article, we argue that inclusion and diversity must also operate at the level of identifying values and defining frameworks of what counts as ethical AI in the first place. This is especially pertinent given the growth of AI and machine learning research across the African continent.
Research and development of artificial intelligence and machine learning technologies are growing in African countries. Programs like Data Science Africa, Data Science Nigeria, and the Deep Learning Indaba with its IndabaX satellite events, which have so far been held in 27 different African countries, illustrate the interest and investment in these fields.
The potential of AI and related technologies to promote opportunities for growth, development and democratization in Africa is a key driver of this research.
However, so far very few African voices have been involved in the international ethical frameworks that aim to guide such research. This might not be a problem if the principles and values in those frameworks are universally applicable. But it is not clear that they are.
For example, the European AI4People framework offers a synthesis of six other ethical frameworks. It identifies respect for autonomy as one of its key principles. This principle has been criticized within the applied ethical field of bioethics, where it is seen as failing to do justice to the communitarian values common across Africa. These values focus less on the individual and more on the community, even requiring that exceptions be made to upholding such a principle in order to allow for effective interventions.
Challenges like these, or even the recognition that there could be such challenges, are largely absent from discussions and frameworks for ethical AI.
In the same way that biased training data can reinforce existing inequalities and injustices, so can frameworks that fail to recognize the possibility of diverse sets of values, which can vary across social, cultural and political contexts.
Furthermore, disregarding social, cultural and political contexts can mean that even a seemingly perfect ethical technical solution is ineffective or misguided once implemented.
For machine learning to make useful predictions, a learning system needs access to training data: samples of the data of interest, with inputs in the form of multiple features or measurements, and outputs that are the labels scientists want to predict. In most cases, both the features and the labels require a human understanding of the problem, and failing to properly account for the local context can result in underperforming systems.
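To make the features-and-labels structure concrete, here is a minimal sketch in plain Python. The feature names, the data, and the nearest-neighbour rule are all invented for illustration; real systems would use a proper library and far more data.

```python
# A toy supervised-learning setup. Each training sample pairs a
# feature vector (the inputs) with a human-assigned label (the output
# we want to predict). All values here are made up for the sketch.

# features: (rainfall_mm, soil_ph); label: "good" or "poor" harvest
training_data = [
    ((620.0, 6.5), "good"),
    ((580.0, 6.8), "good"),
    ((210.0, 5.1), "poor"),
    ((190.0, 4.9), "poor"),
]

def predict(features, data):
    """1-nearest-neighbour: return the label of the closest sample."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(data, key=lambda sample: sq_dist(sample[0], features))
    return nearest[1]

print(predict((600.0, 6.6), training_data))  # close to the "good" samples
```

The point of the sketch is that every step (choosing which features to measure, and who assigns the labels) embeds human judgement about the problem, which is exactly where local context can be lost.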
For example, mobile phone call logs have been used to estimate the size of populations before and after disasters. However, vulnerable populations are less likely to have access to mobile devices, so this kind of approach could produce results that are not useful.
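The distortion this causes can be shown with a small numerical sketch. The regions, populations, and ownership rates below are all invented; the point is only that scaling observed phone counts by a single national rate misallocates people away from the region with lower access.

```python
# Hypothetical illustration of sampling bias in call-log-based
# population estimates. All numbers are invented for the sketch.

true_population = {"region_a": 80_000, "region_b": 20_000}
ownership_rate = {"region_a": 0.90, "region_b": 0.30}  # unequal access

# What a call-log dataset actually observes: phone owners only.
observed_phones = {r: int(true_population[r] * ownership_rate[r])
                   for r in true_population}  # 72,000 and 6,000

# A naive estimator scales observed counts by the national ownership
# rate, ignoring that access varies by region.
national_rate = (sum(observed_phones.values())
                 / sum(true_population.values()))  # 0.78

estimated = {r: round(observed_phones[r] / national_rate)
             for r in observed_phones}

for r in true_population:
    print(r, "true:", true_population[r], "estimated:", estimated[r])
```

The estimator inflates the well-connected region to roughly 92,000 people and shrinks the poorly connected one to under 8,000, so any relief planned from these figures would underserve exactly the more vulnerable population.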
Similarly, computer vision technologies for identifying different types of structures in an area are likely to underperform where different building materials are used. In both cases, as we and other colleagues discussed in another recent article, disregarding regional differences can have profound effects on anything from disaster relief delivery to the performance of autonomous systems.
AI technologies should not simply worsen or incorporate the problematic aspects of today’s human societies.
Being sensitive and inclusive in different contexts is vital to designing effective technical solutions. It is equally important not to assume that values are universal. Those developing AI need to start to include people from different backgrounds: not only in the technical aspects of designing data sets and the like, but also in defining the values that can be drawn on to frame and set goals and priorities.
Mary Carman, Professor of Philosophy at the University of the Witwatersrand and Benjamin Rosman, Associate Professor in the Faculty of Computer Science and Applied Mathematics at the University of the Witwatersrand.