How to shape accountable AI development and use

Gudela Grote, Professor of Work and Organizational Psychology at ETH Zurich and member of the TASC Platform Board of Advisors, shares her latest thinking on how we can embed fairness and accountability into the development and application of Artificial Intelligence (AI) in the workplace.


AI-based systems "generate outputs such as predictions, recommendations, or decisions (...) (and) operate with varying levels of autonomy" (definition by the National Institute of Standards and Technology, 2023). AI has been proclaimed, among other things, to significantly reduce the need for human labor even in highly skilled occupations, to shift control over workers from managers to algorithms, and to revolutionize education. While many of these predictions may overrate the impact of AI, there is no doubt that AI has already shaped and will continue to shape how we live and work.

Three topics appear particularly relevant for how our future with AI will unfold.

1. Will AI automate and/or augment human work?

Ever since the economists Frey and Osborne published their prediction that up to 50% of jobs could be lost to emerging technologies including AI, the future of work has been high on the agenda both in public debate and in academia. Subsequent analyses have shown that these predictions are most likely exaggerated, especially because technology usually affects only certain tasks within jobs rather than whole jobs. The general tenet remains, however, that the current landscape of occupations will change dramatically. In order to manage these changes well, both in companies and in the education system, a crucial question to answer is, as in all previous waves of technology development, whether AI is used to automate and/or augment human work.

2. Will AI become uncontrollable, and what would that mean for accountability?

The moratorium on generative AI development recently proposed by key technology developers themselves speaks to the intense worry of losing control over AI systems. Self-learning systems such as ChatGPT create the "black-box problem": these systems are opaque and unpredictable even for their developers because they change autonomously with any new data available. AI is thus the first technology that can endogenously adapt and improve through its use, raising the fundamental question of who is in control of AI systems and who is to be held accountable for the consequences of their use. To date, from a legal standpoint, it still has to be humans who are accountable. Accountability entails visibility (actors need to inform others of their actions), responsibility (actors need to fulfill the tasks assigned to them), and liability (actors need to abide by laws, regulations, and contracts). In any of its forms, accountability has to be aligned with control; otherwise actors are held to account for actions they had no influence over. Control implies not only influence over how an AI system operates, but also transparency and predictability of the AI, so that influence can be applied efficiently and effectively.

3. Will AI render the powerful even more powerful?

Emerging technologies always raise the question of whether and how power imbalances between employers and workers, between regulators and the regulated, or between business and public interests are affected. For instance, the internet, with the decentralized forms of communication and coordination and the broad access to knowledge it permits, is considered by many to be a democratizing force. Regarding AI, though, there appears to be a general apprehension that power imbalances will worsen. Sources of this apprehension include the algorithmic control of workers, the difficulty of establishing the independent oversight needed for effective AI governance, and the growing power of organizations that own large data sets and the resources to train complex models on those data.

What to do?

Three core elements of accountable development and design of any new technology are transparency, participation, and independent oversight. All three are hotly debated for AI. Regarding the black-box problem, it has been argued that the accuracy gains of more complex models may be outweighed by the increase in transparency when simpler models are used. However, this argument only holds for comparatively straightforward applications that are used to augment rather than replace human decision-making, for instance in medical diagnosis or personnel selection. Complex systems such as the large language models that have given rise to the most recent wave of concerns cannot be rendered explainable; instead, their outputs need to be closely monitored and continuously tested and validated by technology developers, technology users, and oversight bodies.

The necessary feedback loops and joint decisions among multiple stakeholders can be established more easily if AI systems are intended for use at work rather than for private use. In work contexts, institutional and professional standards and practices can be employed to facilitate stakeholder involvement. Power imbalances among stakeholders may hinder effective dialogue and decision-making, though. National and international regulators have an important role to play in counteracting these power imbalances. Regulation is likely to be more successful if it prescribes processes for stakeholder dialogue in "networks of accountability" than if it aims to constrain the development and use of AI.

Guiding AI design processes in networks of accountability

As early as possible in AI development, networks of accountability should be formed that include AI developers, current or potential AI users, and representatives from independent oversight bodies, for instance regulatory agencies in the intended application domains. The aim is to curate negotiations about how control and accountability are to be distributed among stakeholders in different phases of AI development and use. Organizational users thereby come to understand their role in providing high-quality data and the processes involved in validating, updating, and licensing AI systems. Technology developers learn about the difficulties of providing training data that do not introduce biases into automated decision-making, and about user expectations and capabilities. User representatives help to prevent accountability from devolving to individual AI users, for instance the human occupants of a self-driving car, who have no adequate means of control over system functioning. Regulators gain the necessary insights into the actual functioning of systems, insights that AI developers are often unwilling to share due to intellectual property protection. Establishing adequate visibility and responsibility during AI development and use reduces the likelihood that liability claims materialize. In this way, even powerful stakeholders may become willing to engage in accountability networks to co-create sociotechnical systems capable of reaping AI’s benefits and keeping AI’s risks at bay.
