Who Answers When AI Misbehaves?
Event highlights from “Aligning Control and Accountability in the New AI Supply Chain: Who Answers When AI Misbehaves?”, 7 July 2025, Geneva.
By Kitrhona Cerri, Executive Director, TASC Platform
Since the public release of ChatGPT and other large language models in late 2022, the role of AI in daily life and institutional decision-making has expanded at an extraordinary pace. What began as prompt-response interfaces has rapidly evolved into dynamic infrastructures known as agentic systems, which adapt to context, respond autonomously, and influence outcomes in real time across entire workflows.
“These aren’t just tools acting alone. We are seeing systems of agents talking to each other, sometimes even negotiating outcomes on our behalf.” Babak Hodjat, Chief Technology Officer for AI at Cognizant
Governance responses, however, remain reactive and fragmented, struggling to match the scale, speed, and complexity of these emerging systems.
Facing growing questions around risks, trust, and institutional readiness, the TASC Platform convened a high-level workshop on the eve of the AI for Good Global Summit 2025. The session brought together experts from policy, technology, industry, and academia to examine, through a series of interconnected questions, how accountability and control are distributed across increasingly complex and novel AI supply chains.
Discussions covered the conditions needed for meaningful human oversight as new agentic architectures reshape roles and responsibilities. We explored how social dialogue must adapt to the realities of algorithmic management, and how participatory approaches can embed accountability from the start. All of this while asking an overarching question: how must governance frameworks evolve in response?
Accountability Without Control?
Accountability, once distributed across design teams, institutions, and oversight bodies, is quietly shifting downstream, where oversight is often weakest and impact is greatest. For TASC’s Platform Co-Chair, Professor Gudela Grote, a distinguished authority in work and organizational psychology, this move to the margins signals a deeper challenge. Accountability, she noted, is only meaningful when paired with control. Control, in turn, depends not only on access to systems but on the ability to act within them: with clarity of purpose, the autonomy to intervene, and trust in how roles and risks are shared. Control is therefore not just a technical fix; it is a design question, and ultimately a governance one. When users are expected to verify, interpret, and override opaque outputs without insight into where the data came from or how decisions were made, trust is misplaced.
“We do not even always trust human judgement, so why are we so quick to trust AI systems?” Professor Gudela Grote, TASC Co-Chair, ETH Zurich
Amir Banifatemi, Board Member of the International Association for Safe & Ethical AI, and Chief Responsible AI Officer at Cognizant, extended the point:
“We put requirements on AI that we don’t put on ourselves. We need to move from the idea of responsible AI to the responsible use of AI.”
Mark Nitzberg, Executive Director of the Center for Human-Compatible AI at UC Berkeley, brought the challenge into sharp relief. As he described it,
“We are not facing a wave, we are already waist-deep in the tide.”
From underwriting to hiring, AI systems are already shaping decisions with far-reaching implications. In this landscape, he urged institutions to create space for experimentation, not just in models, but in oversight. Governance, he noted, must be adaptive enough to learn in real time, and robust enough to surface where responsibility lies.
Governing the New AI Supply Chain
As agentic systems become more sophisticated, decision-making is no longer linear. Responsibility is increasingly diffused across networks of interacting models, where actions escalate, outcomes evolve, and decisions unfold without always being visible to those they affect.
Amid this growing complexity, Babak Hodjat challenged us to think differently about accountability: when agents act on one another’s outputs, he noted, the result is a chain of decisions in which accountability blurs.
“The good news is, we still have agency over the agents. But only if we define it well. When I say we have agency, it’s not a solved problem, it’s the beginning.”
Agency, then, must be real rather than rhetorical: it must extend beyond developers so that those affected by AI decisions have the means to understand and shape them.
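To make this point tangible, here is a minimal, deliberately simplified Python sketch, our own illustration rather than any production framework discussed at the event: two hypothetical agents pass a task along a chain, and every hand-off is written to a shared decision log so that a human reviewer can later reconstruct who decided what, and why.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DecisionRecord:
    agent: str                          # which agent acted
    action: str                         # what it decided to do
    rationale: str                      # why, in its own terms
    delegated_to: Optional[str] = None  # next agent in the chain, if any


@dataclass
class DecisionLog:
    records: List[DecisionRecord] = field(default_factory=list)

    def append(self, record: DecisionRecord) -> None:
        self.records.append(record)

    def trace(self) -> str:
        # A human-readable chain of custody for the whole workflow.
        return "\n".join(
            f"{r.agent}: {r.action} ({r.rationale})"
            + (f" -> handed to {r.delegated_to}" if r.delegated_to else "")
            for r in self.records
        )


def triage_agent(task: str, log: DecisionLog) -> str:
    # Hypothetical first agent: judges the task high-impact and escalates.
    log.append(DecisionRecord("triage", "escalate", "task affects hiring",
                              delegated_to="review"))
    return task


def review_agent(task: str, log: DecisionLog) -> str:
    # Hypothetical second agent: holds the task for human sign-off rather
    # than acting autonomously on a consequential decision.
    log.append(DecisionRecord("review", "hold for human approval",
                              "consequential outcome for a person"))
    return "PENDING_HUMAN_APPROVAL"


log = DecisionLog()
outcome = review_agent(triage_agent("screen applicant CVs", log), log)
print(outcome)
print(log.trace())
```

The point of the sketch is the log, not the agents: having agency over the agents, as Hodjat put it, begins with defining where decisions are recorded and who can inspect and override them.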
Social Dialogue Is Not Optional
If governance must begin where systems are built, it cannot stop where they are deployed. Maria Mexi, Senior Advisor on Labour and Social Policy at the TASC Platform, brought the conversation back to where AI meets everyday working lives.
Unlike the more familiar concerns around job loss, algorithmic management does not replace workers; it manages them: increasing surveillance, compressing decision time, and redefining what productivity looks like.
Today, more than 80% of large companies rely on algorithmic management tools that shape daily decisions. But for most workers, these systems remain invisible until they fail. As Christy Hoffman, General Secretary of UNI Global Union, put it:
“The people most impacted by AI at work are the least consulted. That’s a failure of design, and of governance.”
This makes social dialogue not just a matter of labour rights, but of governance design. Without early consultation, AI systems risk locking in decisions before workers even see them, leaving fairness to be negotiated after the fact.
Stéphanie Camaréna, CEO of Source Transitions, reinforced that meaningful inclusion of users, practitioners, and communities must begin far earlier than most development processes allow.
“If we really engage people from the start, and understand the real experience of the user, we can embed responsibility, rather than retrofit it once the solution is already out there.”
Designing for Shared Accountability
When dialogue starts upstream, transparency and trust are built into the system from the outset. This, in turn, surfaces trade-offs and calls for defining fairness in context and anchoring accountability.
With multiple fairness metrics already in use across the field, the challenge is not a lack of tools, but a lack of dialogue. What counts as fair, and for whom, are not technical questions alone, but governance issues that must be treated as such.
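To illustrate how such metrics can pull apart, the short Python sketch below, with entirely fabricated toy numbers, scores the same hypothetical hiring outcomes against two common definitions, demographic parity and equal opportunity, and reaches two different verdicts.

```python
# Toy records: (group, qualified, hired) -- all values fabricated for illustration.
records = [
    ("A", True,  True), ("A", True,  True), ("A", False, True),  ("A", False, True),
    ("B", True,  True), ("B", True,  True), ("B", False, False), ("B", False, False),
]

def selection_rate(group: str) -> float:
    # Demographic parity compares how often each group is hired overall.
    rows = [r for r in records if r[0] == group]
    return sum(hired for _, _, hired in rows) / len(rows)

def true_positive_rate(group: str) -> float:
    # Equal opportunity compares hiring rates among the qualified only.
    rows = [r for r in records if r[0] == group and r[1]]
    return sum(hired for _, _, hired in rows) / len(rows)

print("demographic parity gap:", selection_rate("A") - selection_rate("B"))        # 0.5
print("equal opportunity gap:", true_positive_rate("A") - true_positive_rate("B"))  # 0.0
```

Here the equal-opportunity gap is zero while the demographic-parity gap is large: the same system looks fair by one definition and unfair by another. Which verdict should govern is exactly the kind of question the metrics themselves cannot settle.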
Shaping the Ecosystem We Need
For governance to keep pace with AI’s evolution, and to become anticipatory rather than reactive, there is a need for dedicated spaces that draw on foresight, foster collective participation, and avoid fragmentation. This means not multiplying forums, but deepening cooperation through spaces where technical, institutional, and social perspectives come together with a shared sense of responsibility.
Initiatives like the International Association for Safe & Ethical AI (IASEAI) and convenings such as the AI for Good Global Summit are starting to build this connective tissue, laying foundations for a more coherent ecosystem of accountability and care.
The TASC Platform contributes to this effort as a neutral convener, trusted community builder, and system-level collaborator. We invite you to join us to connect agendas, surface shared priorities, and support those shaping the future of work in real time.