Daily-current-affairs / 23 May 2024

The First International AI Treaty: Daily News Analysis

Context-

On May 17, 2024, the Council of Europe (CoE) made a historic move by adopting the first-ever international legally binding treaty on artificial intelligence (AI) during the annual meeting of the Committee of Ministers in Strasbourg. The Framework Convention addresses AI alongside human rights, democracy, and the rule of law, aiming to regulate AI systems comprehensively from their design and development through their use and decommissioning. However, the treaty's regulatory approach and normative ambiguities leave critical questions about responsibilities and liabilities unanswered.

Framework Convention on AI

  • Adoption and Scope: Coordinated over two years by the Committee on Artificial Intelligence (CAI), the Framework Convention brought together the 46 CoE member states and 11 non-member states, along with representatives from the private sector, civil society, and academia. The treaty is open to countries outside the CoE and shares similarities with the EU AI Act in its risk-based approach. It aims to regulate AI use in both the public and private sectors, including companies acting on behalf of the public sector.
  • Objectives and Principles: The treaty promotes the responsible use of AI aligned with principles of equality, non-discrimination, privacy, and democratic values. It covers the entire AI lifecycle and specifies two compliance methods for the private sector: adhering to the convention's provisions or taking alternative measures to meet international human rights obligations. The framework aims for flexibility in implementation to accommodate varying technological, sectoral, and socio-economic conditions, requiring risk assessments to determine necessary actions, such as moratoriums or bans. An independent oversight mechanism and remedial measures are also mandated.

Risk-Based Regulation and Normative Ambiguities

  • Challenges in AI Regulation: Regulating AI differs significantly from traditional regulatory areas like market practices or physical infrastructure. AI encompasses a broad range of technologies, including machine learning, computer vision, and neural networks, which are continually evolving. Risk-based regulation seeks to optimize regulatory resources and administrative power by targeting enforcement at likely harms rather than enforcing every rule uniformly. This involves risk evaluations, balancing trade-offs between risks and opportunities, and setting thresholds for risk acceptability.
  • Normative Ambiguities: Normative ambiguities arise from differing perspectives on risk tolerance and the interpretive application of normative rules. These ambiguities can hinder the enforcement of compliance, particularly when compounded by the global, transnational nature of AI development and production. The unequal distribution of critical AI resources like data and computing power, the diverse ecosystem of stakeholders, and the dynamic emergence of systemic risks further complicate regulation.

Implementation and Compliance

  • Flexible Implementation: The treaty's flexible implementation approach acknowledges the diverse technological, sectoral, and socio-economic conditions across different regions. This flexibility necessitates continuous risk assessments to address emerging risks and adjust regulatory measures accordingly. However, this approach also raises questions about the consistency and effectiveness of regulation across different jurisdictions.
  • Independent Oversight and Redress: The establishment of an independent oversight mechanism is a crucial component of the treaty. This body is responsible for monitoring compliance, assessing risks, and ensuring that AI systems adhere to the treaty's principles. Additionally, the treaty mandates remedial and redressal measures to address harms caused by AI systems. However, the lack of clear guidelines on the loci of obligations and responsibility beyond normative principles poses challenges for practical implementation and enforcement.

Outstanding Concerns and Unanswered Questions

  • Responsibility and Liability: One of the major concerns with the treaty is its failure to clearly define responsibilities and liabilities. The complex global AI ecosystem, dominated by Big Tech companies, creates differential structures of power and dependency. Identifying and specifying the roles of the different stakeholders in this ecosystem (suppliers, consumers, and intermediaries) is essential to clarify the nature of liabilities and enforce regulatory measures. However, the current framework leaves these issues unresolved, devolving regulatory innovation to individual signatory countries.
  • Participation and Legitimacy: The treaty's legitimacy is also questioned because of the limited involvement of countries outside the CoE in the drafting and consultation process. This exclusion could affect the treaty's global uptake and effectiveness. Potential signatories are required to submit declarations of compliance by September 5, 2024, a short turnaround for drafting comprehensive national AI legislation and regulations.

Future Directions and Recommendations

  • Addressing Dynamic Complexity: Future conventions and frameworks need to acknowledge the dynamic complexity of AI-driven systems and ecosystems. AI ecosystems involve interacting systems and components that give rise to emergent complex behavior, requiring continual adaptation to new interactions and changes in the environment. Effective management and compliance systems must address legal questions around liabilities and define clear responsibilities for different actors in the AI ecosystem.
  • Liability Regimes: A 2019 recommendation by a European Expert Group suggested using a product liability regime, assigning responsibility to the entity best placed to manage an AI-related risk as a single point of entry for litigation. However, the treaty does not specify guidelines for adopting different liability regimes, leaving it to signatories to determine the nature of liabilities. This highlights the need for comprehensive and deliberative approaches to understand the social, economic, and legal implications of AI-driven systems when designing effective regulatory measures.

Conclusion

The Council of Europe's adoption of the first international legally binding treaty on AI marks a significant step in AI governance. The Framework Convention seeks to regulate AI comprehensively, promoting responsible use in line with human rights, democracy, and the rule of law. However, its risk-based regulatory approach and normative ambiguities pose challenges in defining clear responsibilities and liabilities. The treaty's flexible implementation strategy and independent oversight mechanism are crucial components, yet the lack of clear guidelines on obligations and liabilities hinders practical applicability and enforcement.

Addressing these challenges requires future conventions to recognize the dynamic complexity of AI ecosystems, clearly define stakeholder roles, and establish robust liability regimes. The international community must engage in comprehensive and deliberative approaches to understand the multifaceted implications of AI and develop effective regulatory frameworks that can adapt to the evolving landscape of AI technologies.

Probable questions for the UPSC MAINS Exam-

  1. Discuss the significance of the Council of Europe's Framework Convention on AI in the context of global AI governance. What are the main challenges associated with its implementation? (10 Marks, 150 Words)
  2. Analyze the risk-based regulatory approach adopted in the Framework Convention on AI. How do normative ambiguities impact the enforcement of AI regulations? (15 Marks, 250 Words)

Source - ORF