Edinburgh Declaration on Responsibility for Responsible AI

Shannon Vallor
Jul 14, 2023
Image: a photograph of a tree in a green field under a blue sky, shown alongside two simplified versions generated by a decision tree algorithm. Rens Dimmendaal & Johann Siemens / Better Images of AI / Decision Tree / CC-BY 4.0

Background: In 2022, the Trustworthy Autonomous Systems (TAS) programme funded by UK Research and Innovation (UKRI) brought together experts in key disciplines across the UK to investigate the responsibility dimension of AI-driven autonomous systems, especially in areas where we might realistically expect that AI systems will be used to complement or substitute for human decision making.

On the 10th of July 2023, the four Responsibility project teams met in Edinburgh, Scotland to share the preliminary findings of our research, and to identify any common ground across our very different approaches to the challenge, which range across computer science, robotics, philosophy, law and the social sciences. Our declaration here, which will soon be followed by an expanded version including concrete examples of application to AI and autonomous systems, reflects that common ground.

We hope it will start a wider conversation about what matters most when we talk about “Responsible AI” and responsibility for autonomous systems.*

*This declaration is a collaboration among researchers in the UKRI Trustworthy Autonomous Systems (TAS) programme who work on Responsibility in the context of design, development, procurement, use, governance and regulation of AI and autonomous systems. It does not necessarily represent the views of the wider TAS programme, its partners or universities.

Preface: Our work has crystallised a shared belief in the possibility of responsible development and governance of AI and autonomous systems (AI/AS). However novel the challenges these technologies present, they are not beyond the reach of responsibility, as a set of mature social and technical practices that underpin trustworthiness. We have been here before, with other inherently risky technologies that seemed ungovernable. And yet, those technologies that became part of the trusted infrastructure of many societies — biomedical technologies, aviation, automobiles, steam power — are now largely governed in ways that enable innovation and its benefits without pushing all of the risk and cost onto unprotected publics.

We must advance AI/AS innovation toward its own stage of trusted maturity: by creating, promoting, and embedding robust and committed responsibility practices in the development and use of AI/AS. Our joint declaration proposes four shifts in the framing of responsibility that we believe are needed for the responsible development, procurement, use and governance of AI and autonomous systems, in the UK and beyond.

First, it may help to briefly clarify some of the different meanings that ‘responsibility’ can have in this context:

Five types of responsibility are relevant to AI and autonomous systems:

Causal: what event made this happen?
Moral: who is accountable or answerable for this?
Legal: who is or will be liable for this?
Role: whose duty was it, or is it, to do something about this?
Virtue: how trustworthy is this person or organisation?

Four Key Shifts on Responsibility Needed to Achieve Responsible AI

1. Accepting responsibility over describing and ascribing responsibility

2. Seeing responsibility as relational over seeing responsibility as an agent property

3. Prioritising responsibility as attending to vulnerability over responsibility as blame

4. Focusing on sustainability of responsible AI/AS innovation over pace of innovation

ANALYSIS:

1. Accepting responsibility over describing and ascribing responsibility

Our research projects draw from technical, social, and humanistic disciplines to study distinct forms of responsibility (moral, legal, causal, role or virtue) as they apply in practice to responsible AI and autonomous systems (AI/AS) development, deployment and governance. Much of our work describes how, when, and which of these notions of responsibility should be attributed to the different types of agents (human and machine) that make up sociotechnical systems like AI/AS.

A deep understanding of these dimensions is essential for the future of responsible AI/AS. Appeals to responsibility are effective only when we can be clear and specific about what responsibility means in a given context. But we hold that however rigorous our research on responsibility, its future impact depends upon the collective will of the actors who shape and govern our tech ecosystem to act on it.

Responsibility can be analysed in minute detail, but if responsibility is never accepted, never taken by the actors with the power to do so, these studies will be impotent, their power to guide action wasted. To have its intended impact, our research must be paired with bold and ongoing commercial and public commitments to accept responsibility for the power and consequences of autonomous systems.

2. Seeing responsibility as relational over responsibility as an agent property

The single thread that binds our projects is the power of responsibility to enable and sustain trustworthy relationships between people, organisations, institutions and publics. How AI/AS impact these relationships is why responsibility matters in this context.

Yet responsibility is too often viewed solely as an isolated property of an agent or system. We seek to find the ‘responsible’ component that caused a harm or accident, or the ‘responsible’ human that is morally accountable or legally liable for it, without considering the relationships that such attributions of responsibility help to hold together.

Even when we seek to understand what makes an agent in some way responsible for an action or decision, it’s the human relationships affected by that action or decision that make this question so important for us to answer. These relationships, and the responsibilities they entail, vary and shift over time and in different contexts. For this reason, mere promises to ‘act responsibly’ or ‘design responsibly’ are less helpful than articulating the ongoing, evolving duties of care that publics rightly expect and that people, organisations and institutions must fulfil to protect the specific relationships in which they and AI/AS are embedded. We hold that the relationships that responsibility serves must move to the centre of ‘responsible AI/AS’ research and its uses.

3. Prioritising responsibility as attending to vulnerability over responsibility as blame

Most studies of responsibility in AI/AS focus on mechanisms for responsibility attribution and the distribution of blame, liability or other obligations to responsible parties. While this work is essential, it is often silent about the vital political function of responsibility: to establish conditions of legitimacy for the human exercise of power, by requiring those who use or unleash a power to demonstrate due care for the interests of those it makes vulnerable.

While we must do more to identify and hold accountable those who use AI/AS in irresponsible and harmful ways, responsible AI/AS is not merely about finding and sanctioning ‘bad actors.’ All who exercise and unleash the material and economic power of AI/AS, even with the best of intentions, must start attending — and be required to attend — responsibly to the vulnerabilities that AI/AS create, perpetuate, amplify or relieve.

Foregrounding the needs of the most vulnerable, led by their knowledge of the lived impact of new technologies, also generally results in better protections for all; it is the most reliable way to ensure that AI/AS are safe and beneficial for everyone. Otherwise, by unjustly endangering human lives and publics, those who build and deploy these tools surrender the social license for these technologies as legitimate powers.

4. Focusing on sustainability of responsible AI/AS innovation over pace of innovation

Much of the debate around AI and autonomous systems focuses on the ideal pace of responsible innovation — calls to speed it up (for those whose primary aim is economic growth and rapid realisation of technology’s potential benefits) and calls to slow it down (for those whose primary aim is the safety, security and competent control of AI/AS).

While this debate exposes important concerns, we think both impulses result from a misplaced focus on the overall pace at which AI/AS innovation happens, rather than what is needed to make AI/AS innovation sustainable, in the UK and globally.

Some applications of AI/AS, for example in the health and climate tech areas, warrant accelerated research and development, in responsible ways. Others, such as the automation of moral and political human judgments with irreversible and severe consequences, arguably require delay or indefinite pause. Yet we call for centring the overarching question: how to make a responsible AI/AS innovation ecosystem sustainable.

Sluggish AI/AS innovation that stalls from lack of investment and public confidence will not serve the UK or the world well. Yet neither will rapid, unchecked innovation that lasts only a few years or decades before running out of public goodwill and available resources.

AI/AS are already placing great demands on our limited global resources (water, energy, rare earth minerals), new stresses on social and economic resilience, and growing challenges for security, human rights and social cohesion. These costs and stressors must be managed responsibly and holistically, with an eye to AI/AS innovation that can take us all much farther, not simply faster.

We invite those from across the ‘Responsible AI’ community and beyond to comment and help us develop and refine these provocations; in the coming weeks we will share an expanded version with concrete examples of applications of these four shifts in Responsible AI research and innovation.

Authors and Signatories

Prof Shannon Vallor*, The University of Edinburgh (TAS Responsibility) *corresponding author

Joanna Al-Qaddoumi, University of York (TAS Responsibility)

Prof Stuart Anderson, The University of Edinburgh (TAS Governance and Regulation)

Dr Vaishak Belle, The University of Edinburgh & Alan Turing Institute (TAS Governance and Regulation)

Prof Michael Fisher, University of Manchester (TAS Verifiability and TAS Responsibility)

Bhargavi Ganesh, University of Edinburgh (TAS Governance and Regulation)

Prof Ibrahim Habli, University of York (TAS Responsibility)

Dr Louise Hatherall, The University of Edinburgh (TAS Responsibility)

Dr Richard Hawkins, University of York (TAS Responsibility)

Prof Marina Jirotka, University of Oxford (TAS Responsibility)

Dr Dilara Keküllüoğlu, The University of Edinburgh (TAS Responsibility)

Dr Nadin Kokciyan, The University of Edinburgh (TAS Responsibility)

Dr Lars Kunze, University of Oxford (TAS Responsibility)

Prof John McDermid, University of York (TAS Responsibility)

Dr Phillip Morgan, University of York (TAS Responsibility)

Dr Sarah Moth-Lund Christensen, University of Leeds (TAS Responsibility)

Professor Paul Noordhof, University of York (TAS Responsibility, Philosophy)

Dr Zoe Porter, University of York (TAS Responsibility)

Prof Michael Rovatsos, The University of Edinburgh (TAS Responsibility)

Dr Nayha Sethi, The University of Edinburgh (TAS Responsibility)

Prof Jack Stilgoe, University College London (TAS Responsibility)

Dr Carolyn Ten Holter, University of Oxford (TAS Responsibility)

Prof Tillmann Vierkant, The University of Edinburgh (TAS Responsibility)

Prof Robin Williams, The University of Edinburgh (TAS Governance and Regulation)


Shannon Vallor

Professor of Ethics of Data and AI at The University of Edinburgh, philosopher, author of Technology and the Virtues and the forthcoming The AI Mirror