The world’s first comprehensive AI framework, the EU AI Act (the “AI Act”), is set to bring considerable changes to international arbitration. On 26 November 2024, the CIArb webinar ‘Global Hot Topics: The EU AI Act and International Arbitration’, co-organised by CEPANI40 and the CIArb Young Members Group, explored the implications of the Act, prompting close scrutiny of regulatory competition in particular.
Will ‘seat shopping’ become the norm?
With the AI Act having come into force on 1 August 2024, ‘seat shopping’ may become ever more prevalent. Stricter compliance requirements may ‘empower’ parties to use more permissive jurisdictions as bargaining tools, potentially creating a two-tier arbitration landscape in which the choice of seat drives behaviour and skews outcomes. Jurisdictions with less stringent AI regulations, for instance, could attract parties seeking a more ‘favourable’ arbitration environment.
The AI literacy mandate
With just months left until February 2025, arbitration professionals must attain a baseline level of AI literacy (article 4). Disparities in knowledge could, however, skew decision-making, allowing better-resourced parties to exploit these gaps and ultimately creating an uneven playing field on the global stage. Arbitration stakeholders should therefore consider implementing tailored training programmes and resources to ensure that they and their teams are well-versed in the operation and implications of AI technologies.
Transparency: more than just a buzzword
Transparency is vital. Will arbitration agreements adequately address AI usage from the outset? Will arbitrators be compelled to confront these issues early in the arbitration proceedings?
As counsel increasingly use AI for document production, including evidence gathering and predictive analysis, critical questions arise: do parties or arbitral institutions need to know which AI tools arbitrators are employing? The implications for accountability can be significant.
Furthermore, arbitral institutions must take a proactive role in safeguarding the arbitration process amid these complexities, ensuring that the integration of AI does not compromise the integrity and fairness of proceedings. This may require clear guidelines on how AI can be used effectively and ethically in arbitration.
The arbitrator’s role as administrator of justice
The AI Act will notably impact arbitrators as administrators of justice (article 6.2 and Annex III, 8(a)). Article 6.3 of the Act provides a derogation: an AI system listed in Annex III is not considered high-risk where it does not materially influence the outcome of decision-making. AI used merely for proofreading, for instance, might therefore fall outside the classification. Nonetheless, the emerging nuances of different scenarios will require careful consideration.
Key discussions arose concerning the AI Act’s applicability to three-member tribunals where only one arbitrator is based in Europe. What if the co-arbitrators are European but the presiding arbitrator is based elsewhere? Furthermore, how will the Act handle situations where the arbitration is seated outside the EU but hearings or evidence gathering take place within Europe? Would article 2.1(c) – which applies where the output produced by an AI system is used within the EU – be strictly enforced? These issues complicate our understanding and raise questions about fairness and consistency in arbitration.
Furthermore, article 2.1 appears to establish territorial applicability where the arbitration is seated in Europe; this interpretation is not absolute, however. Article 2.1(g), which covers affected persons located in the EU, prompts further questions on jurisdiction and enforceability in cross-border arbitration.
Ethical dilemmas
Ethical discussions must remain at the forefront. Article 5 of the AI Act prohibits certain harmful practices, raising concerns about relying on AI where the integrity of decision-making is at stake. While AI can aid proofreading, for instance, could subtle influences on outcomes present risks? Robust safeguards must be established to mitigate bias and misuse, particularly as simultaneous translation becomes commonplace during hearings, to ensure that content remains unaffected by AI-generated interpretations. This topic was also discussed during the webinar, underscoring its current relevance.
Cross-border implications
Cross-border implications introduce significant challenges. Awards might face validity issues owing to varying perceptions of AI across jurisdictions. As enforcement dynamics shift, ensuring that awards remain enforceable – especially when the place of enforcement is uncertain – becomes critical. This evolving landscape raises concerns about unintentionally stifling technological advancement in arbitration even as innovation thrives in other sectors.
Conclusion
As we navigate the evolving landscape shaped by the EU AI Act, we must consider whether we are adequately prepared. The implications of the Act for fairness, transparency, and accountability in arbitration are profound. It is essential for practitioners, arbitrators, and stakeholders alike to engage in meaningful dialogue, build resilient frameworks, and ensure that the future of arbitration is both technologically advanced and ethically sound.
For further insights, the details of the webinar can be accessed here.