
Aligning Control and Accountability in the New AI Supply Chain: Who Answers when AI Misbehaves?

On the sidelines of the AI for Good Summit 2025, please join us for a conversation on how control and accountability can be aligned across all stakeholders in the new AI supply chain. As AI evolves from static tools to autonomous, agentic systems, the question of who holds responsibility, and under what conditions, has never been more urgent.

This workshop brings together experts in policy, technology and industry, alongside leaders in academic research and AI governance to examine how accountability and control are distributed, and how fairness and decent work can be preserved across increasingly complex AI supply chains. We’ll explore how distributed systems, platform work, and AI-as-a-service models are shifting the burden of responsibility, and ask how governance frameworks must evolve in response.

Speakers Include:

Gudela Grote, Professor of Work and Organizational Psychology, ETH Zurich and TASC Platform Co-Chair

Babak Hodjat, Chief Technology Officer, AI, Cognizant

Mark Nitzberg, Interim Executive Director, International Association for Safe & Ethical AI (IASEAI)

Intended Outcomes

We aspire to develop actionable recommendations for ongoing discussions on crucial questions such as the governance and regulation of agentic AI, platform work, algorithmic management, and responsible AI innovation, centred not only on innovation and efficiency but also on safety, fairness, dignity, and sustainability for the developers, implementers and users of AI.

Context

To date, much of the discussion on the impact of AI-based systems has focused on two distinct issues: the data used in AI models (e.g. concerning biases, privacy, security) and AI's potential for replacing or augmenting human work. These seemingly unrelated concerns become interconnected once the responsibilities of stakeholders along the AI supply chain are examined: How are control and accountability for system functioning and outcomes, and for creating and maintaining decent work, distributed across different stakeholders?

  • Developers may be held accountable for the quality of training data, yet have insufficient control over how that data is produced, for instance by data annotators working under precarious conditions.

  • Organizations commissioning systems for algorithmic performance management may have insufficient insights into these systems to live up to their responsibility as employers.

  • AI systems may have been developed to augment human decision-making, but may end up being used for fully automated decisions with no human oversight to guarantee their accuracy.

AI adoption is thus not only a technical challenge but a governance issue with implications for safety, equity, job quality, and workers' rights. In AI supply chains characterised by multiple decision-making loops, the question of how to enable dialogue, transparency and alignment amongst agents and actors is fundamental.

Agenda

14:00 Welcome

14:10 Firestarters

14:50 Open Discussion

15:15 Coffee Break

15:30 Roundtable Discussions

Introduction from Mark Nitzberg, sharing insights at the leading edge of global research and emerging questions for policymakers.

  • Roundtable 1: Human Oversight along the AI Supply Chain - moderated by Gudela Grote

  • Roundtable 2: The New AI Supply Chain - moderated by Babak Hodjat

  • Roundtable 3: Social Dialogue on AI - moderated by Maria Mexi, Senior Advisor, Labour and Social Policy, TASC Platform, Geneva Graduate Institute

  • Roundtable 4: Participatory AI and Bottom-up Innovation - moderated by Stéphanie Camaréna, Founder and CEO, Source Transitions

16:30 Feedback and suggestions for moving forward

17:00 Discussion Wrap-up and Reception
