UNESCO’s ‘Ethical’ Artificial Intelligence: Programmable Social Norms?

Laura Breckon | Cyber & Technology Fellow

On 2 December 2020, UNESCO's 'Recommendation on the Ethics of AI' project was recognised as one of the world's greatest AI initiatives by Telefónica and the think tanks OdiseIA and Compromiso Empresarial. UNESCO was one of ten winners jointly honoured for their contributions to the development of AI technologies with a positive social and ethical impact. UNESCO's 'Recommendation' is an ambitious attempt to direct artificial intelligence systems to prioritise the betterment of human welfare. It may be possible to create such a framework; the harder question is whether it could ever be implemented.

The project rests on a simple concept: translating political norms into computable AI cognition by amassing a database of predetermined responses to ethical scenarios. The potential applications are vast, from informing methods of education and pedagogy (potentially bridging learning divides across communities), to information management (imagine the perfectly curated museum), to speeding up judicial and legislative processes. The database is intended to serve the creation of a 'humanistic' AI world. In a certain sense, it may be seen as a long-awaited antidote to the prevailing pessimism about technology's increasing encroachment into our social and personal lives and economies.

Where critics may be inclined to say that the advancement of AI technologies is being undertaken for advancement's sake, to please investors by increasing efficiency and raising productivity whatever the long-term implications for the broader community, UNESCO proposes a solution with this project. It purports to provide a framework of normative principles, aligned with the United Nations' Sustainable Development Goals, with which AI cognition must comply. In working out how this can be achieved, it boldly posits that AI technologies need not be lawless and can abide by ethical norms.

Amidst the pandemic, this idea would strike a chord with most people. The 'humanity' that is lost when communication is limited solely to digital channels has been markedly felt across the globe. Technology, despite its convenience, is cold, functioning solely to maximise Key Performance Indicator scores and minimise inefficiencies. Those concerned by this may see such an idea as a welcome relief.

In its simplicity, however, lies its Achilles' heel: could any database, no matter how vast and precisely managed, truly represent the diversity of human values and turn that knowledge into a genuinely 'universal' ethical mind? Are 'universal' ethics even possible? UNESCO's framework seeks to resolve this with mandatory human oversight by representatives drawn from a variety of its member states. This oversight potentially serves another function: it is hoped it could help address the concerning phenomenon of AI mimicking racist behaviours, notably racial bias in facial recognition technologies. By informing its database from a deliberately diverse data pool, UNESCO's 'Recommendation' database may guide AI development away from such outcomes.

Even if these wrinkles are ironed out to some satisfying extent, an 'ethical' AI may, procedurally or otherwise, nonetheless be ignored. Human error will remain a force to be reckoned with. The question therefore remains whether this framework would ever be accepted and implemented widely enough to achieve its goals. UNESCO's 'Recommendation' database is still in its early stages of development and far from complete. In any case, it is a timely undertaking, one that underscores a collective desire to keep humanity in technology and gives us reason to look forward to a more optimistic future.

Laura Breckon is the Cyber & Technology Fellow for Young Australians in International Affairs.