Expanding the Horizons of Human-Centered AI

A recent entry on the ACT blog by Alexandra Bahary-Dionne (M.A. in Law, UQAM), an ACT student, is now available. It was written following her participation, alongside that of Karine Gentelet (professor at UQO and ACT researcher), in the summer school on the societal impacts of artificial intelligence organized by HCI Across Borders.

The summer school, titled "Expanding the Horizons of Human-Centered AI (HCAI)", was held at the India Habitat Centre in New Delhi, India. Its objective was to explore different perspectives on the development of HCAI, as well as the opportunities and challenges they entail. The conversation unfolded through 16 presentations on the themes of AI design and systems, human-centered machine learning, the relationship between AI and the common good, and critical perspectives on AI. Above all, it was sustained by a plurality of disciplines, institutions and geographies. The event brought together the reflections and initiatives of several entities, including universities in India (Naveen Bagalkot, Janaki Srinivasan and Aaditeshwar Seth) and the United States (Rosa Arriaga, Munmun De Choudhury and Tapan Parikh), the private sector (Microsoft Research, with ethnographer Jacki O'Neill, Kalika Bali, Amit Sharma and Mohit Jain), research institutes (Jerome White of Wadhwani AI and Urvashi Aneja of Tandem Research), as well as community organizations (Haiyya, Love Matters India and MakerGhat). Participants came from India, Bangladesh, Nepal, the United States and Canada.

What is HCAI?

First, it is an AI "by and for" humans: an AI focused on their aspirations, their needs and their fears as a community. While our discussions revealed that HCAI suggests a contextual approach to the development of AI rather than a formal theoretical requirement, such an approach is based on the premise that a technology created by humans and affecting them must also be governed by humans, including through human consent.

This approach, and the sheer number of people who create the technologies represented at this event, have, in my opinion, several implications for legal research in the field of AI. In the legal field, as elsewhere, we often tend to perceive artificial intelligence in terms of its effects or impacts. On a more conceptual note, looking at technological innovation gives us the tools to study a very particular moment in AI development, the design stage, which involves technical, social, economic and political choices. It highlights the social dimension of innovation through the interaction between technology and social issues in the design process of these tools. HCAI is therefore a well-suited approach for moving beyond technological determinism. Drawing on a sociological perspective, it makes it possible to move away from a deterministic vision in which technology determines the social and innovation is an external force imposed on society, while still recognizing that technologies are not neutral and that they help to structure practices. On a more pragmatic note, what better way to do so than to explore the approach and practices of designers as they try to design technologies that use AI to meet social needs?

Jaron Lanier, for example, puts the question of the impacts of AI into perspective in a rather provocative way: does AI even exist? According to him, there are only data and designers, and, let us add, the social structures that form the context for the design and use of AI. Together, they create a kind of mystifying creature called AI. HCAI is then perhaps the opportunity, or even tangible evidence, to see that technology is a human and social construct, since many of the aspirations, needs and fears related to AI vary according to geographical and social location. Both the motivations behind the design of technical systems and the concerns raised by their implementation highlight different conceptions of the common good, but also of human rights. One important use that emerged from the presentations is that of chatbots to meet various socio-economic needs, for example to help farmers maximize their harvests or to provide sexual health information throughout India (Haiyya and Love Matters India). These lessons are just as relevant at the stage where users appropriate technologies: their use is embedded in a context of specific practices. As Naveen Bagalkot points out, HCAI draws on Human-Centered Design, which studies human interactions in their context in order to design computer systems. This is what ethnomethodologists do in the field of AI, seeking to explore how technology can fit into pre-existing human practices by observing those practices, and what chatbot designers do when they use real human conversations to train their algorithms.

Which human being are we talking about?

This is the most obvious question when we talk about HCAI: who is the human being at its heart? One rapidly emerging issue is the representation of people in technical systems, particularly in the data used; consider, for example, the lack of geodiversity in Google Images photos. But the fact that technologies are human constructions, both determining and determined, also implies paying particular attention to the relationship, and sometimes the social distance, between the people who design technologies and those who use them. On this matter, many designers are self-critical of the fact that they often start from a problem to be solved. Yet who identifies this problem, how, and from what position? Designers are thus criticized for often creating for themselves rather than for their target audience.

To push the question further, and since humanity is at stake on a collective scale, we can also ask which community we are talking about. In this context, HCAI is a model that should help break out of Western discourses on AI. Lucy Suchman explains that it is necessary to return to the social and cultural imaginaries in the background of design:

In the case of the human, the prevailing figuration in Euro-American imaginaries is one of autonomous, rational agency, and projects of artificial intelligence reiterate that culturally specific imaginary. At stake, then, is the question of what other possible conceptions of humanness there might be, and how those might challenge current regimes of research and development in the sciences of the artificial, in which specifically located individuals conceive technologies made in their own image, while figuring the latter as universal.

In this context, Urvashi Aneja and Janaki Srinivasan point out that AI is a socio-technical system and that its data reflect pre-existing biases, cultural assumptions and sometimes invisible power relations, which are transmitted through classification choices. Technologies designed without regard for their local context may thus fail to take into consideration local resources as well as social and cultural norms. Moreover, when decisions are made on the basis of automation and prediction, a form of monopoly on knowledge is created from these implicit assumptions. The challenges of AI are therefore not only economic, social and political, but also epistemic.

Similarly, HCAI should not limit itself to the relationship between the people who design technologies and those who use them: technologies may have effects that differ from the intentions behind their creation. For example, technologies designed for some people may have indirect negative externalities on others for technical, social, economic, cultural and political reasons. Conceptualizing an AI that focuses on a group of people is therefore not just about thinking about those people. HCAI research is dedicated, for instance, to exploring the jobs that are and will be affected by AI, but also those created through AI, sometimes under harmful conditions, such as "click work" and other activities in the gig economy.

In short, a holistic conception of the humans we are talking about ultimately implies understanding the environment in which these humans live. In practice, this means collecting data at several scales when developing a technical system. Rosa Arriaga conceptualizes this approach by drawing inspiration from ecological systems theory: a technology intended to serve a particular group of people should take into account not only that group, but also its community (school, neighbours, friends), its environment (social, cultural, material and virtual) and society in the broadest sense (political, legal and economic structures). The goal of this theory is to develop technologies at the societal level while taking the local context into account. Obviously, the lack of representation of certain groups in the data poses challenges inherent to this approach. It also raises the question of what data should not be collected, or what kinds of problems AI cannot help to solve. Finally, and more pointedly, human-centered approaches also involve asking why AI should be centered on humans rather than on other entities.

Beyond the data: citizen representation at the heart of innovation

While it is possible to think about including people in data, some presentations led us to think about representation on another level, suggesting a reflexive perspective on innovation: how can we be inclusive not only in the data, but also in the design of technologies? For organizations such as Gram Vaani (presented by Aaditeshwar Seth), an ethical approach to innovation requires participatory governance in the design of technologies through community networks. (Its slogan? Community-powered technology.) Participation can take place at several levels: understanding people's preferences (user-centered design), designing the tool with them (participatory framework) or managing the project with them (in the form of action research). The Mobile Vaani application, for example, is based on the precept that one way to ensure the transparency and accountability of governments and companies is from within, namely through a more democratic system of governance in the design of technologies and of the policies that concern them. Participatory governance models would then be a way to mitigate the concentration of power, but also to bring about social change from the inside. In addition to training computer science students in ethics and reflexivity in system design, such initiatives suggest that the representation of workers and users is necessary in company decision-making in the AI field. To go further, we can also think of initiatives "by and for", where designers and users are one and the same.

Finally, community HCAI initiatives highlight the invaluable contribution of activist groups and citizen initiatives in developing AI by and for those who need it, people who help to structure technical systems iteratively on the basis of their own experiences. In the end, as we have done in socio-legal research, it may become essential to think about AI not only by studying its impacts on society, as if AI were imposed on society from the outside, but also by exploring AI in society, from the conception of these technologies to their appropriation.

