AI in Arbitration: Acceleration or a New Zone of Vulnerability?
Artificial intelligence is no longer a speculative topic in international arbitration and litigation. It is already being used, quietly but increasingly, across multiple stages of the process. From document review and data extraction to the preparation of draft pleadings, AI tools are beginning to reshape how parties approach dispute resolution.
The shift towards extensive AI use has been rapid. Recent industry surveys indicate that more than 60% of legal professionals in the UK are already using or actively experimenting with generative AI in their workflows, particularly in document-heavy processes (see, for example, reports by Thomson Reuters and LexisNexis).
What began as an efficiency tool is quickly becoming embedded in core legal workflows. The question is no longer whether AI will be used, but rather how its use reshapes (if at all) the evidentiary process in complex disputes.
Where Does AI Add Value?
In an ideal world, AI delivers exactly what complex disputes demand: speed, scale, and consistency.
Large datasets that would take weeks to process can now potentially be structured and analysed in hours. Pattern recognition across thousands of documents no longer seems constrained by human capacity. In some internal benchmarking exercises within law firms, AI-assisted review has reportedly reduced document processing time by 50–70%, depending on the complexity of the dataset.
These capabilities are particularly relevant for valuation and damages expert work. Valuation models, damages calculations, and financial analyses often depend on integrating large and heterogeneous datasets. Appropriately configured, AI can assist in organising inputs, identifying anomalies, and ensuring consistency across assumptions.
This is not a marginal improvement: in complex, data-heavy disputes, AI may become part of the analytical infrastructure itself.

Source: 2025 Generative AI in Professional Services Report, Thomson Reuters.
A Different Standard of Use?
The same features that make AI powerful require a different standard of use. The risk does not arise from the technology itself, but from how its inputs and outputs are interpreted, validated, and relied upon by counsel, experts, and, ultimately, tribunals and courts.
Generative AI models operate through statistical pattern recognition rather than an internal understanding of legal or economic reality. Their outputs may embed assumptions that are not immediately visible.
Courts have already encountered high-profile failures arising from reliance on AI outputs. In Mata v. Avianca (S.D.N.Y. 2023), lawyers submitted AI-generated citations to cases that did not exist, prompting sanctions against those responsible.
In the quantum aspects of arbitration, the risks of overreliance on AI may be subtler and harder to detect. Errors will not necessarily appear as obvious mathematical mistakes. They are more likely to manifest as shifts in assumptions, weighting, or interpretation – precisely the areas where expert judgment is most critical.
The Problem of Explainability
The issues above point to a central tension: explainability. Arbitral proceedings rely on the ability to test evidence, whether factual or expert. Expert opinions are scrutinised through cross-examination, where assumptions, methodologies, and reasoning are examined in detail.
If an expert relies on an AI-assisted analysis, several questions arise:
- Can the expert fully reconstruct how a conclusion was reached?
- Can the opposing party meaningfully test that reasoning?
- Can the tribunal assess reliability where reasoning is only partially transparent?
These questions no longer appear purely theoretical. Regulatory discussions in legal and policy circles, including those involving European Commission AI governance initiatives, increasingly focus on transparency and accountability in AI-assisted decision-making.
AI and the Role of the Expert
For all its benefits, AI does not reduce the need for expertise; it makes that need more visible.
As analytical processes become partially automated, the differentiating factor of expert services shifts: it is no longer the ability to produce outputs, but the ability to interpret, validate, and defend them under scrutiny in dispute proceedings.
In this sense, AI should not be understood as a substitute for expert work, but as a layer that restructures it. Properly deployed, it allows experts to move away from routine processing and focus on higher-order judgment: identifying key concepts, challenging assumptions, and constructing defensible commercial interpretations.
A New Zone of Vulnerability?
The most significant impact of AI lies not where it succeeds, but where it exposes the weakest point of the process.
Traditionally, the main vulnerability of damages expert evidence lay in the calculations themselves: errors that could be tested and identified. With AI, that vulnerability may shift from mechanics to professional judgment, from calculation to interpretation.
The risk is no longer producing incorrect outputs as such, but relying on outputs that are not fully understood, are commercially unsound, or rest on assumptions that are not fully transparent.
AI Adoption Is Accelerating Faster Than Regulation?

Source: Generative AI Legal Survey H2 2025, LexisNexis.
The speed of AI adoption in legal and related services matters: AI is being integrated into workflows faster than formal standards and procedural rules are evolving. This creates a potential gap between:
- capability and control
- output and explainability
- efficiency and accountability
Implications for Arbitral Practice
These developments raise practical questions:
- Should AI use in arbitration and litigation be disclosed?
- If so, at what level of detail?
- Can tribunals meaningfully assess AI-assisted reasoning?
Discussions within the arbitration community, including initiatives associated with the International Bar Association, suggest that guidance will emerge at some point. But for now, practice seems to be moving ahead of regulation.
Conclusion: Integration, Not Substitution
AI is becoming almost unavoidable in arbitration and litigation, especially in document-heavy matters. But its value depends entirely on how it is used.
Treated as a replacement for professional judgment, generative AI introduces a host of risks. Integrated within a structured legal and expert framework, it enhances that framework.
The challenge, then, is not adopting AI as such, but integrating it in a disciplined way into the tested framework of international dispute resolution.
Which leads to a final question: if AI-assisted analysis becomes standard, what will distinguish counsel-drafted pleadings and robust expert evidence from automated output?