
Navigating the Global AI Governance Landscape: From Voluntary Standards to Legally Binding Rules

March 26, 2024
By El Mahdi Mouajib

Artificial Intelligence (AI) stands at the forefront of technological innovation, promising transformative impacts across various sectors, from healthcare and transportation to finance and governance. As AI’s capabilities continue to advance rapidly, so do its ethical, legal and societal implications. Consequently, the imperative for establishing comprehensive governance frameworks for AI has never been more pressing.

Navigating the complexities of AI governance requires concerted efforts to ensure AI technologies are developed, deployed and managed responsibly. This is why, over the past year, the international community has produced several guidelines and standards for AI, some voluntary and some binding in nature.

 

The European Union's Legislation

AI Act

On March 13, 2024, the European Parliament voted to adopt the AI Act, the world’s first comprehensive framework for regulating the risks of AI.

The rules apply to the development, deployment and use of AI in the EU single market, or where an AI system affects people living in the EU. The law adopts a risk-based approach, classifying AI systems into three categories – unacceptable risk, high risk, and limited or minimal risk. After the regulation is adopted, there will be a two-year grace period, during which companies can voluntarily join the AI Pact, an initiative the EU Commission will launch shortly after the regulation enters into force this coming May.

AI Liability Directive

The proposal – currently still making its way through the EU legislative process – sets out procedural rules for civil claims involving AI. Its main goal is to make it easier for those who have suffered damage caused by AI to bring claims.

AI Product Liability Directive

The proposal will update the EU’s existing product liability framework to better reflect the digital economy and will bring AI products within its scope.

Unlike the AI Act, neither directive will have direct effect in EU member states; once fully adopted at EU level, both will have to be transposed into national law.

AI in the Workplace Directive

Discussions on the need to regulate the use of AI in the workplace have also started, with both the EU Commission and the Parliament looking into the topic more closely. The Commission, in particular, has commissioned an external study on algorithmic management and the use of AI technologies in the workplace. The study focuses on issues such as the impact on employers and employees; opportunities and challenges; and the extent to which existing EU laws already partially regulate the use of AI in the workplace. It will most likely serve as the basis for a future AI in the Workplace Directive, to be proposed by the incoming Commission after the European elections.

The Council of Europe’s Convention on AI

The Council of Europe – not to be confused with the Council of the European Union – has been working on a draft convention on AI, Human Rights, Democracy and the Rule of Law. The binding treaty aligns closely with the values and initiatives pursued by the Council of Europe, particularly concerning the ethical and legal implications of AI technologies on human rights and democratic principles.

The Council of Europe is an international organisation responsible for standard-setting on human rights, democracy and the rule of law across its 46 member states, which include all 27 EU member states as well as non-EU countries such as the United Kingdom and Turkey.

The convention seeks to address key issues such as transparency, accountability, fairness and human oversight in AI systems. It emphasises the protection of personal data and privacy, as well as ensuring that AI technologies operate in accordance with fundamental human rights principles.

 

Voluntary Initiatives

The G7's Code of Conduct on AI

On October 30, 2023, the G7 countries agreed to establish a set of International Guiding Principles on AI and a voluntary Code of Conduct for AI developers under the Hiroshima AI Process. The principles are intended to help organisations developing, deploying and using advanced AI systems promote the safety and trustworthiness of the technology.

The Code of Conduct is intended to provide detailed, voluntary guidance for organisations developing AI, including complex generative AI systems. It covers measures related to data quality, bias mitigation and technical safeguards, among others. The code will be reviewed periodically through stakeholder consultations to ensure the measures remain relevant and future-proof.

Both documents represent a significant step in the international effort to build a global framework for the responsible and safe development and use of AI systems. While both are voluntary, governments have endorsed them and are now encouraging businesses and other organisations to commit to them.

The Bletchley Declaration

The Bletchley Declaration on AI Safety was announced on November 1, 2023, at the AI Safety Summit hosted by the British Government at Bletchley Park. The declaration marks a significant commitment to ensuring the ethical and responsible development and deployment of AI technologies. Key provisions of the Bletchley Declaration include:

  • Ethical Principles: The declaration emphasises the importance of adhering to ethical principles in AI development and deployment. It underscores the protection of human rights, privacy and societal well-being as fundamental considerations.
  • Interdisciplinary Collaboration: Recognising the complex nature of AI technologies, the declaration calls for increased interdisciplinary collaboration among researchers, policymakers, industry stakeholders and civil society organisations. This collaboration aims to foster a holistic approach to AI safety and governance.
  • Diversity and Inclusion: The Bletchley Declaration emphasises the importance of promoting diversity and inclusion in AI research and development. It highlights the need for diverse perspectives and voices to ensure that AI technologies serve the interests of all segments of society.
  • Public Engagement: The declaration underscores the importance of enhancing public understanding of AI technologies and their potential impact on society. It calls for increased efforts to engage with the public, raise awareness about AI safety issues and foster informed dialogue on the subject.

Overall, the Bletchley Declaration represents a significant step forward in advancing AI safety and governance, reflecting the British Government’s commitment to harnessing the benefits of AI while mitigating associated risks.

The Biden Administration’s Executive Order on Safe, Secure and Trustworthy AI

Issued on October 30, 2023, President Biden’s Executive Order aims to establish a comprehensive framework for governing AI development and deployment in the United States. The order emphasises the importance of AI technologies adhering to ethical principles, ensuring privacy, promoting fairness and mitigating potential risks. It calls for collaboration among federal agencies, the private sector, academia and international partners to advance AI innovation while prioritising safety and security. Additionally, the order mandates the establishment of standards and guidelines for AI systems, encourages transparency and accountability in AI processes, and underscores the need for workforce development to address the evolving demands of the AI landscape.

United Nations General Assembly Resolution on Artificial Intelligence

On March 21, 2024, the United Nations General Assembly adopted the first-ever global resolution on artificial intelligence. This landmark decision encourages all nations to prioritise human rights, personal data protection and the assessment of AI-related risks.

The resolution was initiated by the United States and received backing from China and an additional 120 countries. It was passed by consensus, securing the endorsement of all 193 UN member states. Although the resolution is non-binding and lacks enforcement mechanisms for non-compliance, it represents a significant move towards establishing a global framework for AI governance.

This development underscores the United States’ efforts to lead AI innovation on the international stage. Through the resolution, the Biden administration aims to shape the standards set by global bodies focused on artificial intelligence, positioning itself as a counterbalance to Chinese influence within these organisations.

 

Conclusion

The legislative landscape surrounding AI across the EU, U.S., G7, UN and other international bodies reflects a growing recognition of the need for robust governance frameworks to address the ethical, legal and societal implications of AI technologies.

While each jurisdiction has pursued its own approach, there is a common thread of emphasis on principles such as transparency, accountability, fairness and human rights in AI development and deployment. However, despite significant progress, challenges remain in harmonising diverse regulatory approaches, addressing ethical dilemmas, ensuring global cooperation and fostering innovation.

Moving forward, continued collaboration among stakeholders at the national, regional and international levels will be crucial to navigating the complexities of AI governance effectively. By fostering dialogue, sharing best practices and adapting regulatory frameworks to evolving technological landscapes, policymakers can strive towards achieving a more cohesive, inclusive and human-centric approach to AI governance that maximises the benefits while mitigating risks for societies worldwide.

In a forthcoming paper, we will delve into AI governance initiatives in the Global South. While much attention has been paid to AI development and governance in Western nations, it is imperative to shed light on the unique challenges and opportunities faced by countries in the Global South. From navigating resource constraints and technological disparities to addressing socio-economic inequalities and cultural contexts, AI governance initiatives in these regions present a complex landscape ripe for exploration.

The views and opinions in these articles are solely of the authors and do not necessarily reflect those of Teneo. They are offered to stimulate thought and discussion and not as legal, financial, accounting, tax or other professional advice or counsel.
