Delhi HC Rules Against AI-Generated Fabricated Citations in Legal Petition

Delhi HC dismisses petition citing AI-generated, fabricated content, highlighting risks of “AI hallucinations” and need for human verification in legal filings.

By TRT Editorial

The Delhi High Court (HC) recently dismissed a petition filed in a dispute over delayed possession of flats in Gurugram, finding that the judgments cited in it were fabricated and AI-generated. The case highlights the emerging legal challenges posed by artificial intelligence (AI) and its limitations in producing accurate legal documentation.

The petitioner, GWA, had attempted to support its claims by citing earlier judgments, including paragraph 74 of Raj Narain v Indira Nehru Gandhi (1972) 3 SCC 850 and a 2008 decision allegedly titled Chitra Narain v DDA. Careful review, however, revealed that the cited judgments either did not exist or contained far fewer paragraphs than claimed: the Supreme Court judgment referenced ran to only 27 paragraphs, so the passages quoted from its supposed paragraph 74 were entirely false.

Homebuyers involved in the case submitted an eight-page note documenting the discrepancies, fabrications, and misquotes in the petition. The court determined that the petition relied exclusively on AI-generated content without any factual verification. This is believed to be one of the first instances in India in which a petition was dismissed entirely because of fabricated, AI-generated references.

Experts explain that this phenomenon is known as “AI hallucination”, in which AI systems produce false, misleading, or fabricated information while presenting it as factual. Such hallucinations can arise from insufficient training data, incorrect model assumptions, or biases in the underlying datasets. In legal contexts, relying on AI without proper human verification can result in serious procedural errors and risks undermining judicial processes.

The HC’s ruling emphasizes the need for caution when integrating AI into legal research and drafting. While AI tools can assist with data retrieval and document summarization, human oversight is essential to verify sources and maintain accuracy. Legal professionals are now being urged to critically assess AI-generated outputs before submission to courts or clients.

The implications of AI hallucinations extend beyond law. In healthcare, financial trading, and security domains, AI errors can have significant consequences. For example, a model predicting medical conditions could misidentify symptoms, while financial AI tools may flag legitimate transactions as fraudulent. This case demonstrates that the legal sector is not immune to these risks.

Authorities and AI developers suggest measures to mitigate such errors, including narrowing the scope of AI queries, providing precise instructions, using templates, and applying iterative human feedback. Training AI models on verified datasets and integrating fact-checking mechanisms can further reduce the likelihood of hallucinations.
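To make the fact-checking idea concrete, here is a minimal sketch, in Python, of the kind of automated cross-check described above: it flags citations in a draft that do not appear in a verified reference list. The VERIFIED_CITATIONS set, the citation pattern, and the flag_unverified_citations helper are hypothetical illustrations, not part of any real filing tool; an actual workflow would query an authoritative legal database and still route every flagged item to a human reviewer.

```python
import re

# Hypothetical verified reference list. In a real workflow this would be
# a lookup against an authoritative legal database, not a hard-coded set.
VERIFIED_CITATIONS = {
    "(1973) 4 SCC 225",  # illustrative entry only
}

# Rough pattern for an SCC-style citation such as "(1972) 3 SCC 850".
# A production tool would need to handle many more reporter formats.
CITATION_PATTERN = re.compile(r"\(\d{4}\)\s+\d+\s+SCC\s+\d+")

def flag_unverified_citations(draft_text: str) -> list[str]:
    """Return every citation in the draft absent from the verified set."""
    return [c for c in CITATION_PATTERN.findall(draft_text)
            if c not in VERIFIED_CITATIONS]

draft = "Reliance is placed on paragraph 74 of (1972) 3 SCC 850."
for citation in flag_unverified_citations(draft):
    print(f"UNVERIFIED: {citation} -- requires human review before filing")
```

Even a crude check like this would have flagged the citation at issue in this case; the final verification, reading the actual judgment, must still rest with a human.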

The Delhi HC dismissal also serves as a precedent, signaling that courts will not accept unverified AI-generated content as a substitute for genuine legal citations. Lawyers and law firms are now being reminded of their responsibility to validate all references and not rely solely on AI-generated information.

As AI adoption grows across sectors, this case highlights the importance of combining technological efficiency with rigorous human oversight. While AI can enhance productivity, legal and professional accountability must remain paramount to ensure that automated outputs do not compromise fairness, accuracy, or credibility.

Image source: livelaw.in
