Madeleine Nugent | Cyber Security Fellow
Traditionally, the concept of warfare has centred on the conventional methods of the armed forces. However, with the advent of the internet – and the speed, reach and exploitation of cyberspace – modern warfare has come to be viewed as a hybridisation of strategic components. Executed across political, economic, social and digital platforms, ‘hybrid warfare’ has blurred the distinction between what we perceive as times of peace and times of war.
Hybrid warfare entered Western political discourse with the onset of what has come to be labelled the ‘fourth industrial revolution’. This revolution, ushered in like those before it on the back of technological innovation (steam, electricity and automated manufacturing), naturally brings with it a sense of uncertainty. The pace of technological advancement has left state apparatuses struggling to keep up and, consequently, a public hesitant to embrace technological transformation.
Foremost among these technologies, and perhaps stoking the most apprehension, are artificial intelligence (AI) and machine learning (ML). AI, initially conceived as a digital replication of the human brain, has come to encompass vast data storage and processing capabilities, further enhanced by these technologies’ ability to learn and adapt through experience. Through the convergence of conventional and unconventional tools, AI’s impact on hybrid warfare is fuelled by an accelerating technology industry and exacerbated by the power-diffusing nature of cyberspace.
The debate surrounding AI and ML technologies focuses heavily on ethics, privacy and transparency. From unmanned vehicles to the weaponisation of ‘big data’ and the use of facial recognition software to target specific subgroups within a population, AI/ML is a driving force in hybrid warfare.
Given the public’s exposure to negative examples of AI’s use, it is no surprise that discourse is often averse to national implementation of the technology. But AI/ML also has the ability to advance economic, health, security and social development. To ensure that we shape this environment to our needs and potential, Australia’s position in this revolution requires a hybridised approach of its own.
In 21st-century democracies, citizens are increasingly being called upon to respond to security threats. AI/ML presents Canberra with a timely case study for hybridising its strategic approach to policymaking and implementation by including citizen consultation. Citizen participation in public discussion and the security apparatus will build trust and resilience between government and civil society, as well as between citizen and machine. Trust is key to how we adapt to and shape this new environment, as it encourages the adoption and development of these technologies for the ‘public good’.
Like jury duty in Australia – one of our four responsibilities as Australian citizens – a ‘citizen jury’ can promote community engagement and shape policy for the public good. A citizen jury aims to encourage participatory engagement in workshopping complex problems that are often dominated by expert stakeholders, academics or government apparatuses.
Whilst sometimes criticised as a means of legitimising already-implemented policy, citizen juries also facilitate open political discussion and involvement where submission papers or town hall-style meetings often fall short. They have been used by democracies worldwide, including Germany, the United States and the United Kingdom, on matters of health, security and crime prevention. In 2016, Australia itself conducted citizen juries on South Australia’s role in the nuclear industry. With AI/ML on the cusp of widespread use in Australia, there has never been a better time to engage in public debate about how we use this technology in the future – crafting policy from the outset.
The Australian government has made an admirable start on AI/ML. As a ‘strategy for the future’, the Minister for Industry, Science and Technology released a discussion paper in 2019 encouraging public conversation about the use of AI in Australia. Its purpose was to develop an AI Ethics Framework, with initial discussion facilitated through submissions. Unsurprisingly, transparency, accountability and security were among the main concerns raised. For a technology such as AI/ML, a citizen jury could help build public acceptance by addressing concerns like these and shaping our environment in a way that benefits the public good – for our democracy, and born from our democracy.
We must not be deterred by the misuse of AI/ML abroad. Discussion, development and incorporation of AI/ML into our everyday lives will encourage greater security – and potentially resilience against its malicious use by adversaries – during this uncertain time.
Madeleine Nugent is the cybersecurity fellow for Young Australians in International Affairs.