Lee Tiedrich on the importance of regulations for AI policy

Lee Tiedrich, Ethical Technology Professor at Duke University, discusses AI policy, how to make it effective, and the importance of regulation

As previously reported by AI Magazine, nations are now investing in artificial intelligence (AI), quantum computing and synthetic biology to bolster their security defences as national security threats across the globe become more complex.

In today’s digital environment, states, criminals and terrorists are able to exploit new ways to harm their adversaries. With data harvesting, for example, nations can lose crucial data and open the door to new types of attacks.

The North Atlantic Treaty Organisation (NATO) has announced its 18-point AI strategy and launched a “future-proofing” fund with the goal of investing around US$1bn, while the UK Government has revealed its National Artificial Intelligence Strategy, described as ‘representing the start of a step change for AI, recognising the power of the technology to increase resilience, productivity, growth and innovation across private and public sectors.’

To learn more about the importance of AI, the relationship between the private and public sector and ethics when formulating AI strategy, AI Magazine spoke to Lee Tiedrich, Ethical Technology Professor at Duke University.

Tiedrich is a Distinguished Faculty Fellow in Ethical Technology with the Duke Initiative for Science & Society and has a dual appointment in Duke Law School. She is a widely recognised leader in artificial intelligence, data and emerging technology matters. She was selected to serve in the Global Partnership on AI (GPAI) Multistakeholder Expert Group, which was conceived by the G7 and now includes 25 countries, and she is co-chair of the GPAI MEG IP subcommittee.

What needs to be done to ensure AI policy is effective?

To help ensure AI policy is effective, policymakers need to support the adoption of laws, policies and standards that foster the development of responsible AI and that implement and operationalise widely embraced AI ethical principles, such as those adopted by the OECD. This should be done in a way that embraces sound scientific principles. Here are some key steps:

Policymakers should continue to embrace a “risk-based and proportionate” approach to AI regulation. This recognises that AI covers a wide range of use cases and technologies, and this approach should help to ensure that the regulatory requirements for an AI system are commensurate with its risk profile. For example, a chatbot that recommends movies should not be regulated in the same manner as an AI-enabled medical device used to help treat patients.
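To make the proportionality idea concrete, here is a minimal, purely illustrative sketch in Python; the risk tiers and obligations are hypothetical and are not drawn from any actual regulation:

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical risk tiers for illustration; real regulatory categories differ."""
    MINIMAL = "minimal"   # e.g. a chatbot that recommends movies
    HIGH = "high"         # e.g. an AI-enabled medical device

# Hypothetical obligations per tier: requirements scale with the system's risk.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["transparency notice"],
    RiskTier.HIGH: [
        "transparency notice",
        "pre-deployment conformity assessment",
        "human oversight",
        "post-market monitoring",
    ],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the regulatory obligations proportionate to a system's risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.MINIMAL))  # light-touch: ['transparency notice']
print(obligations_for(RiskTier.HIGH))     # the full set of high-risk obligations
```

The point of the sketch is simply that the movie chatbot and the medical device fall into different tiers and therefore face different requirements, rather than a one-size-fits-all rule.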

In addition to laws and regulations, policymakers should continue their focus on encouraging the development of AI technical benchmarks, frameworks and standards, analogous to what we have in the cybersecurity context. As an example, NIST is working on an AI Risk Management Framework. These types of tools can provide clearer guidelines for AI developers and deployers, help customers differentiate between trustworthy and untrustworthy AI offerings, and promote competition by lowering the barriers to entry for emerging companies that meet the applicable AI benchmarks or standards. As with AI regulations, these tools should be anchored in sound scientific principles.

Policymakers should continue to encourage and facilitate voluntary data sharing and make government data sets available in ways that comply with privacy and other applicable laws and appropriately address proprietary rights issues. This work is underway in several jurisdictions, including the EU and the US, and includes efforts to promote data interoperability through the creation of common data formats and APIs. Developing a broader suite of standardised agreements tailored for data sharing and continuing to focus on privacy-enhancing technologies (PETs) are also key. Collectively, these efforts should lead to better access to reliable data sets, which in turn should enhance AI trustworthiness and promote competition.
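As one concrete illustration of a privacy-enhancing technique that can make shared statistics safer, the sketch below releases an aggregate count with calibrated Laplace noise, a simplified form of differential privacy. The data set and epsilon value are hypothetical, and a production system would rely on a vetted DP library rather than this toy:

```python
import random

def dp_count(records: list, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise (simplified differential privacy).

    A counting query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so the noise scale is 1 / epsilon; a smaller
    epsilon means more noise and stronger privacy.
    """
    # The difference of two Exponential(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return len(records) + noise

# Hypothetical shared statistic: the recipient learns the count only approximately.
records = list(range(1000))
print(dp_count(records, epsilon=0.5))  # e.g. 997.3 rather than exactly 1000
```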

Finally, policymakers need to continue to pursue policies that promote AI education, address workforce issues, and unlock computing capacity, particularly for academia and small and medium-sized businesses. An example is an initiative to create a US National AI Research Resource.

What needs to be done with AI regulation to ensure AI is ethical within government operations?

For governmental use of AI, attention must be paid to the procurement process and to governmental auditing and accountability standards. As an example, some work is underway in the United States exploring how AI should be addressed in the procurement process, such as through a pilot program being conducted by the US Department of Defense’s Joint Artificial Intelligence Center. Similarly, in June 2021, the US Government Accountability Office published a report entitled “Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities”. Accountability is important, particularly when governmental use of AI impacts the safety or welfare of individuals. As with AI regulation, standards and frameworks, however, it is important that government AI procurement and accountability standards be grounded in scientific principles that can be operationalised.

How can policymakers ensure that AI creates a better future for their populations?

Policymakers should continue to invest in AI use cases that promote the public good, such as using AI to address climate change, improve healthcare and education, and make government services more efficient and accessible to individuals.  

While promoting AI development, policymakers also need to maintain their sharp focus on ensuring that AI is not used in ways that violate personal privacy or undermine our democratic values. This applies to AI developed within their borders, as well as AI developed in China and in other countries that do not have the same respect for human rights. This work is ongoing in various jurisdictions, including in Europe and in the US, which unveiled plans to create an AI Bill of Rights. It also implicates our trade laws, particularly when addressing AI developed in countries where human rights aren’t protected.

Finally, policymakers should maintain their emphasis on fostering responsible innovation and ensuring that the current and future workforce is prepared to capitalise on opportunities presented by AI.  
