AI in regulatory drafting: A promise of efficiency and a peril of inaccuracy

Author: Renny Reyes
Read time: 4 min
Published: 18 Aug, 2025

The rise of Artificial Intelligence (AI) is no longer a futuristic fantasy; it’s a present-day reality that is reshaping industries, and the legal and regulatory field is no exception. A recent conversation with a colleague sparked a crucial question that many of us are grappling with: to what extent will AI replace humans in complex tasks like regulatory drafting? While the initial reaction might be to dismiss the idea of a machine drafting our laws, a closer look reveals a more nuanced picture, one that is both promising and full of challenges.

The inevitable rise of AI in regulatory drafting

The notion that AI will not replace humans in regulatory drafting, at least in the short term, is a comforting one. However, evidence suggests that we are on the verge of a significant shift. It is not a matter of if but when AI will start to take on a more substantial role. The potential for AI to analyze large amounts of data, identify patterns, and even generate draft clauses is undeniable. This could lead to a future where humans and AI work in tandem, with AI handling the more routine aspects of drafting, while humans provide the critical thinking, ethical considerations, and multi-contextual understanding that are, for now, beyond the reach of machines.

The accuracy hurdle

However, before we can fully embrace AI in such a sensitive field, we must confront its most significant limitation: accuracy. The current generation of AI models, while impressive in its ability to generate human-like text, is prone to inaccuracies. These models can “hallucinate” information, creatively mix and match data from different sources, and present the result as “accurate” with a confidence that can be dangerously misleading. This is not just a technical glitch; it is a fundamental problem that stands in the way of responsible AI adoption.

The language barrier

This accuracy problem is further compounded when we move beyond the English-speaking world. AI models are trained on large datasets of text and code, and the majority of this data is in English. This means that AI is often less accurate and reliable when working in other languages, where it has fewer “sources for inspiration.” This creates a digital divide, where the benefits of AI are not equally distributed, and the risks are disproportionately higher for non-English speakers.

The path forward: A two-fold approach

So, what is the solution? It’s not a single fix, but a two-fold path that must be pursued with rigor: meticulous training and ensuring a human is always in the loop.

First, we must recognize the absolute necessity of training. For AI to be a reliable tool in a field as critical as regulatory drafting, it cannot be a one-size-fits-all solution. It must be meticulously trained on specific, relevant legal documents and continuously corrected by domain experts. This is not just a recommendation; it is an obligation. We have a responsibility to ensure that any AI system used in such a capacity is accurate, unbiased, and aligned with our legal and ethical principles.

Second, and equally crucial, is establishing the non-negotiable role of the human in the loop. AI can be a powerful assistant for research and drafting, but it cannot be the final judge. Human oversight must be a permanent fixture in the process, providing an enriched multi-contextual understanding, ethical judgment, and nuanced interpretation that machines currently lack. The final decision and, critically, the accountability for that decision, must always rest with a human expert. This isn’t a temporary measure while the technology matures; dare I say, it’s a fundamental safeguard for maintaining the integrity and responsibility of our regulatory systems.

Conclusion

The journey towards integrating AI into regulation is just beginning. While the potential for increased efficiency is tantalizing, we must proceed with caution. The promise of AI can only be realized if we address its current limitations head-on. By investing in training, promoting transparency, and fostering a culture of critical evaluation, we can harness the power of AI to improve our legal systems, rather than undermine them. The future of regulation will likely be a collaborative one, but it is a future that we must build with a clear-eyed understanding of both the potential and the perils of this transformative technology.

Dr. Renny Reyes

PAARS Founder

With over 15 years of experience in regulatory policy and governance, I’m passionate about making regulatory frameworks and regulations more effective, transparent, and aligned with real-world needs, whether through hands-on reform or thoughtful reflection in academic work.
