IGF 2021 WS #184 Syncing AI, Human Rights, & the SDGs: The Impossible Dream?

Time
Wednesday, 8th December, 2021 (15:50 UTC) - Wednesday, 8th December, 2021 (17:20 UTC)
Room
Conference Room 1+2

Organizer 1: Marianne Franklin, Internet Rights and Principles Coalition/Goldsmiths University of London
Organizer 2: Dynamic Coalition Internet Rights and Principles Coalition, Internet Rights and Principles Coalition
Organizer 3: Minda Moreira, Internet Rights and Principles Coalition (IRPC)

Speaker 1: Renata Avila, Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 2: Anita Gurumurthy, Civil Society, Asia-Pacific Group
Speaker 3: Raashi Saxena, Civil Society, Asia-Pacific Group
Speaker 4: Thomas Schneider, Government, Western European and Others Group (WEOG)
Speaker 5: Paul Nemitz, Intergovernmental Organization, Western European and Others Group (WEOG)

Additional Speakers

Michelle Thorne, Mozilla (Technical Community)

Parminder Jeet Singh, ITforChange (Civil Society)

Moderator

Marianne Franklin, Civil Society, Western European and Others Group (WEOG)

Online Moderator

Minda Moreira, Civil Society, Western European and Others Group (WEOG)

Rapporteur

Dynamic Coalition Internet Rights and Principles Coalition, Civil Society, Western European and Others Group (WEOG)

Format

Debate - Classroom - 90 Min

Policy Question(s)

Economic and social inclusion and sustainable development: What is the relationship between digital policy and development and the established international frameworks for social and economic inclusion set out in the Sustainable Development Goals and the Universal Declaration of Human Rights, and in treaties such as the International Covenant on Economic, Social and Cultural Rights, the Conventions on the Elimination of Discrimination against Women, on the Rights of the Child, and on the Rights of Persons with Disabilities? How do policy makers and other stakeholders effectively connect these global instruments and interpretations to national contexts?
Promoting equitable development and preventing harm: How can we make use of digital technologies to promote more equitable and peaceful societies that are inclusive, resilient and sustainable? How can we make sure that digital technologies are not developed and used for harmful purposes? What values and norms should guide the development and use of technologies to enable this?

This session addresses the rise of AI as an increasingly popular means to achieve the Sustainable Development Goals in "efficient" and "measurable" ways. The technological advances in digital networking ushered in during the Covid-19 pandemic include AI applications and designs.

Audience Poll: https://www.menti.com/igcrztzbxh : The code is 1410 2732

The policy questions guiding this session signal a challenge for all stakeholders working at the intersection of these three broad areas of internet governance: namely whether current AI trajectories, existing human rights standards, and global compacts aiming to address environmental degradation at the planetary level are, fundamentally, incompatible pathways.

  • Article 4 of the Charter of Human Rights and Principles for the Internet addresses this relationship only in broad terms.
  • In recent years intergovernmental organizations and national bodies have been codifying both ethics frameworks for AI R&D and regulatory instruments that consider AI and Human Rights standards in tandem.
  • The technical community and businesses not only express concern about the impact of regulatory initiatives on innovation but have also been exploring sustainable, human rights-respecting AI applications.
  • Meanwhile, communities in the Global South and around the world have been mobilizing around the environmental impacts of AI roll-outs that impinge on the fundamental rights and freedoms of communities as well as individuals.

This session draws together the outcomes from sessions that the IRPC (co-) organized for the 2019 and the 2020 IGF meetings:

1) Sustainable Internet Governance & the Right to Development (IGF 2020):

2) Sustainable Internet Governance By Design: Environment & Human Rights (IGF 2020):

3) Internet Futures and the Climate Crisis - Paths to Sustainability or Extinction? (IGF 2019):

4) Data Governance by AI: Putting Human Rights at Risk? (IGF 2019).

A debate framed in this counter-intuitive form opens up innovative thinking about concrete ways in which AI designers working in government and business, and those developing alternative designs in civil society organizations, can effectuate "human rights-based AI by design" that enables equitable and inclusive outcomes for local communities as well as national economic wealth.

SDGs

1.b
4.3
4.7
5.b
7.b
8.2
9.4
11.2
12.2
16.9
17.18
17.6
17.7

Targets: The targets selected here all bespeak emerging uses of, and investments in, AI tools and systems. Each speaker will be invited to focus their intervention on those goals selected above that are most pertinent to their position on the question for debate. The links will therefore be consolidated in the preparatory meetings and discussions before the event itself.

Description:

This workshop session brings into alignment three thematic streams, and their policy implications, that tend to follow parallel paths: (1) AI, (2) human rights norms and law, and (3) sustainability. Participants will contribute their responses to the following provocation:

What if current AI trajectories - now indispensable to how internet and other digital technologies work - are actually undermining the sustainable future of human rights and the natural world?

The session explores ways in which these issue areas can be synchronized as feasible and human-enhancing internet governance policy agendas, in light of the inroads that AI technologies have made into all layers of the internet: its architecture, terms of access and use, content management, and data gathering, storage, and management.

The format for this session is an open debate. Speakers' responses, grounded in their expertise and commitment to synchronizing these three objectives, will frame the ensuing public discussion and the audience's decision on the outcome of the debate.

Expected Outcomes

This session is a form of thought experiment that brings together discussions often conducted in silos, as the various communities working on AI consider the human and environmental implications of these technologies within their respective sectors.

The issues that arise when priorities and visions for AI, sustainable development and human rights-based internet governance institutions meet around the same table will provide the impetus for further explorations.

These interactions will be ensured by: 1) brevity of all interventions, including those of invited speakers; 2) full use of the online video-conferencing chat room, facilitated by the moderating team; 3) direct inclusion of the audience in the room and online; 4) incorporation of social media commentary, if available, from nominated organizing team members.

Online Participation

Usage of IGF Official Tool. Additional Tools proposed: Twitter will be deployed before and during the session: #WS184 #IGF2021 @netrights

A live-polling tool will be deployed during the session to engage the audience from the outset.

Key Takeaways

Unregulated, opaque, and fragmented AI can be harmful to humans and the environment, and more needs to be done to ensure that AI is human rights-based and environmentally sustainable by design.

More transparency, accountability, and clear regulatory frameworks are necessary, as well as dialogue and cooperation among stakeholders. AI must be inclusive, non-discriminatory, and rooted in democratic processes, the rule of law, and human rights.

Call to Action

  • A more human-centric digital transition that is diverse, inclusive, democratic, and sustainable, to ensure that AI causes no harm to humanity or the environment.
  • The primacy of democracy: a global democracy able to deliver on complex technology, with more transparency, reporting, and accountability, more collaboration between stakeholders, and a general set of rules that can be used for future technology development.

Session Report

WS #184 Syncing AI, Human Rights, & the SDGs: The Impossible Dream? brought together three thematic streams that are often discussed along parallel paths: Artificial Intelligence (AI), human rights, and environmental sustainability.

The session started with the provocative question: What if current AI trajectories - now indispensable to how the Internet and other digital technologies work - are actually undermining the sustainable future of human rights and the natural world?

Participants were also invited to respond to the Mentimeter question: How would you describe the relationship between AI, Human Rights, and Sustainability?

 

The panel agreed that while AI offers great potential in important areas such as medicine, food production, education, and the climate crisis, to name just a few, the harmful effects of AI on human rights and the environment need particular attention. Mass data gathering, processing, use, and storage demand ever-growing energy consumption, and AI has been used to speed up fossil fuel extraction, with harmful impacts on the environment. By the same token, human rights have been affected by algorithmic bias and other discriminatory processes, as well as by impacts on the rights to privacy, security, and trust. Despite the abundance of principles, there is no united response; the lack of accountability, transparency, and a global or collective vision for AI adds to what Paul Nemitz called "techno-absolutism", which is undermining democracy. Moreover, power struggles between the developed "AI have" countries and the developing "AI have-not" countries are paving the way for new forms of colonialism, namely data colonialism and data warfare, as Raashi Saxena and Parminder Jeet Singh pointed out, all of which hinders efforts to develop AI systems that ensure both human rights and environmental sustainability.

 

Concrete actions to mitigate the harmful impacts of AI were discussed as comments, questions, and suggestions were raised in the room and by online participants. Many of the issues addressed focused on the lack of transparency, accountability, and developed public-interest infrastructures; the fragmentation of responses by governments and international bodies; the difficulties citizens face in accessing data collected by public entities that claim to be safeguarding the public interest; the need to examine dependencies and power and to ensure that workers' rights to privacy and security are safeguarded; and the dangers that the automation of judicial systems poses to individual rights and freedoms. Renata Avila added the lack of scrutiny of procurement as an example of an area of crucial importance for ensuring that human rights and environmental sustainability are taken into account when acquiring AI technologies. Other speakers agreed with this observation, with Michelle Thorne pointing out that Mozilla had also realised that procurement needs to be taken into account in transparency and sustainability reporting.

Overall, despite some disagreements and divergent views on how to foster and implement policies that work for all, there was general agreement that more needs to be done to promote transparency and accountability, that clear and effective regulation is needed, and that companies need to work with policymakers rather than against them. There was wide consensus that horizontal rules are necessary and that they could go hand in hand with sectoral rules, as Thomas Schneider elaborated. These horizontal rules, Paul Nemitz added, should be clear and simple rules that an ordinary individual can understand. When asked about the role of youth in the development of these simple rules, Nemitz explained that those on the receiving end of technologies who put serious effort into reading the laws should be able to understand their meaning clearly and to make informed decisions on whether to accept their use.

While some speakers (Paul Nemitz, Thomas Schneider) called for strengthening democratic processes and the primacy of democracy and the rule of law over technology, to ensure that democracy can deliver on complex technology in this digital age (participants were quick to point out that not all countries have the luxury of a democratic system), others (Renata Avila, Parminder Jeet Singh) called for strengthening multilateralism as the solution. Renata Avila called for a return to multilateralism and for updating the foundational principles of the United Nations (UN) to reflect the challenges of technology, as well as for work on a set of building principles for future technologies at the global and interplanetary level. Parminder Jeet Singh, building on Avila's suggestion, called for a global democracy model around AI. He suggested a global space where people can come together to discuss AI and develop research, and proposed that big data be broken up so that data collection, cloud computing, and the consumption of AI services are separated, which he pointed out can only be achieved at the UN level.

From the floor as well as in the Zoom chat, there were also concrete suggestions on public/private cooperation, including a trust seal on products; the need to strengthen dialogue by introducing some form of institutionalised, regular dialogue as a way to ensure transparency and the right to information; and the creation of transparency regulation and external audits of algorithms to ensure that users are informed about the kind of data collected and presented and that companies are held accountable.

Following up on the relationship between AI and the environment, Michelle Thorne pointed out that 90% of greenhouse gas emissions at Mozilla come from the use of digital tools; the company was therefore pushing for mandatory reporting and for expanding the conversation on AI and environmental sustainability by putting people at the heart of the issue and starting to talk about digital rights and climate justice as a way to move the conversation forward.

 

Raashi Saxena also endorsed a more human-centred approach to technology and stressed the importance of bottom-up approaches as a way to foster inclusivity, incentivise disadvantaged groups, and create more awareness of the impact of AI on our daily lives. Building on this bottom-up approach, Renata Avila referred to new constitutions, such as Chile's, being written by citizens, and pointed out that these offer a great opportunity to include this topic by creating the necessary set of rules for future technologies.

 

In the final round of statements, the panel reinforced its earlier positions by calling for:

  • A renaissance of democracy and the rule of law, and a renaissance of multilateralism that is embedded in a collective vision and offers global rules for the future that are fair to all
  • More cooperation among all parties, with AI as the last chance to bring the world together
  • A more equitable place where big, small, advantaged, and disadvantaged countries can work and compete together
  • A digital transition that is feminist and sustainable
  • A challenge to the dominant narratives of AI and the creation of narratives that put the public interest at the heart of AI, producing the sustainable and equitable narratives that we need.

 

The session ended with a question from Zoom participants (Law students at Greenwich):

Is a human rights approach enough to keep the human in the loop?