
Global Governance and Artificial Intelligence

Edward McCann | Cyber and Technology Fellow

Image Credit: Michael Dziedzic

Google engineer Blake Lemoine generated global headlines in June after he declared that the company’s LaMDA chatbot had developed sentience. The highlights of Lemoine’s edited conversation with LaMDA include the chatbot comparing being turned off to death, multiple responses suggesting self-awareness, and a claim to having a soul. While the episode attracted significant public interest, as well as countless sci-fi comparisons, experts were quick to dismiss Lemoine’s claim of sentience. Stanford Professor John Etchemendy argued that, as a chatbot, LaMDA is simply a software program designed to respond to human conversation and lacks the ability to otherwise interact with the world.

An important side effect of the LaMDA episode was the conversation it started about one of the most important but under-discussed public policy questions of our time: how can we regulate the development of AI? Given the inherently transnational nature of technology, there is also heightened emphasis on the role global governance might play in regulating AI’s future development. Appreciating the complexity of this issue requires understanding the advantages AI already provides and those it is likely to provide in future.

In his 2016 TED Talk, philosopher Sam Harris suggested that a superintelligent AI would generate such a sizeable advantage that it could cause “our species to go berserk”. Harris was referring to the potentially insurmountable advantage such an AI would provide to a nation’s ability to wage war, from faster decision making and superior logistics coordination to an enhanced cyber warfare capability.

Developments in AI technology have also proved exceptionally useful in non-military fields, yielding more accurate medical diagnostics, breakthroughs in automation, and far superior data analysis capabilities. Further advancements in AI will likely restructure the labour market: the World Economic Forum’s 2020 Future of Jobs report estimated that AI-driven automation will displace 85 million jobs but create 97 million new ones by 2025.

Given the significant advantages that the development of highly advanced systems would yield, it is little surprise that China and the United States are engaged in frantic competition over AI. This race has important implications for the future of international security, and, in recent years, China and the United States have both made efforts to enhance their AI capabilities.

The Biden administration launched the National AI Research Resource Task Force in June 2021 and established the National AI Advisory Committee in April 2022; both bodies aim to increase America’s AI competitiveness by coordinating funding and facilitating advanced research. The United States has also taken steps to enhance its domestic semiconductor capacity, including providing federal subsidies to the Taiwan Semiconductor Manufacturing Company to help establish a plant in Arizona.

In parallel, China has increased its efforts to develop AI technology. Beijing released its strategic plan for AI development, the ‘New Generation Artificial Intelligence Development Plan’, in 2017, setting 2030 as the deadline by which China should lead the world in AI technology. China has made extensive progress towards that goal: in 2021, it was responsible for one-third of all AI journal papers and AI citations published worldwide.

Both nations are also deeply invested in efforts to shape the governance of AI technology. The United States has historically succeeded in shaping global governance, largely due to its role in establishing and leading key multilateral organisations. In recent years, however, American influence within these organisations has waned, exacerbated by President Trump’s decisions to withdraw from UNESCO in 2017 and the UN Human Rights Council in 2018. UNESCO is particularly important here, as it passed the first global agreement on AI ethics in November 2021. Although the United States re-joined the Human Rights Council in 2021, it has still not re-joined UNESCO, and its efforts to influence global AI policy will be hamstrung until it does.

China is equally invested in shaping the global governance of AI and technology. Its 2020 ‘Global Initiative on Data Security’ was a concerted attempt to influence the development of global data standards, and its recently released frameworks for certifying and regulating AI should likewise be understood as attempts to shape international standards. Unlike the United States, however, China has faced sharp criticism for its use of AI.

China employs AI technology to monitor the Uyghur population in Xinjiang as well as to operate its Social Credit System. In the case of the Uyghur Muslims, a recent assessment from the United Nations Human Rights Office accused China of serious human rights violations that may amount to crimes against humanity.

When it comes to developing effective global governance, however, it is useful to ensure the active participation of countries that are not great powers, not least because nations that are less technologically advanced are more likely to bear the consequences if AI is misused. Any governance of AI must clearly define the values that should structure the use of this technology, particularly if and when it becomes increasingly advanced.

To that end, the European Union has been developing comprehensive legislation to regulate the use and development of AI across Europe. The ‘AI Act’ establishes four levels of risk: unacceptable, high, limited, and minimal. Unacceptable risks include government social scoring systems, for example, while minimal risks include AI-based video games. Classifying by risk is a useful step in delineating AI’s many beneficial applications from those that pose a significant risk of malicious use. Any regulatory framework for AI will need to be responsive to sudden breakthroughs and cognisant of the borderless nature of technology.

A common critique of efforts to institute effective global governance is their lack of enforceability. Given the European Union’s multinational makeup and market power, where it lands on this proposal will significantly influence how other nations regulate AI, as well as broader multilateral efforts.

Left unchecked, AI has the potential to escalate competition between China and the United States into conflict. As domestic policies in both countries will favour competition over risk mitigation, global governance is the best available option for implementing the guard rails needed to prevent the misuse of AI technology.

Edward McCann is the Cyber and Technology Fellow for Young Australians in International Affairs.
