AI hallucination spooks law firms, halts adoption


Law firms are sharing cautionary notes with their employees, conducting training programmes and drafting agreements to assure clients that AI tools will not be used for legal advice or research, according to solicitors representing companies across sectors. Others are pulling back from using OpenAI’s ChatGPT, Google’s Gemini and Microsoft’s Copilot to prepare their courtroom representations. 


“We are currently in the midst of drafting an AI policy. Since we do not yet have a formal policy in place, we are cautious about allowing AI use for drafting any legal documents or advice,” said Probir Roy Chowdhury, partner at JSA Advocates & Solicitors and an adviser to leading technology companies. 

“We anticipated that younger attorneys, who are already using AI tools like ChatGPT or Gemini in their daily lives, would naturally start using them for work,” Chowdhury said. “Hence, partners were instructed to speak to their teams individually about dos and don’ts regarding AI use.” 


Law firms are seeking to reassure clients because AI tools have been known to generate fictitious results. In March, a civil judge in Karnataka cited three non-existent rulings in a case involving Sammaan Capital and Mantri Developers; the high court called the lapse "disturbing" and referred the matter to the chief justice, with suspicion pointing to AI tools such as ChatGPT. Similarly, the Bengaluru bench of the Income Tax Appellate Tribunal retracted an order in February that referenced fictitious Supreme Court and Madras High Court verdicts, possibly the result of AI-generated inputs from a tax department representative.

Such inaccuracies can damage cases and hurt clients.

Khaitan & Co. is working on an AI compliance and usage policy for its partners, typically the face of a law firm, to assure clients about how and where it uses AI. The firm rolled out its own AI tool, KAI, in 2023.

“Clients may also refuse consent for their data to be used and stored in any AI tools,” said Rohit Shukla, chief digital officer at Khaitan & Co. “We will release the policy by May.” 

Law firms have to ensure that no client data is fed into AI tools that others can access.

Big Tech cautious  

A senior Delhi-based lawyer, who represents Big Tech firms in India’s courts, said that over the past year, top tech firms have rolled out formal strictures on the use of AI within law firms and during court representations. 

“Meta Platforms is very clear in their communications to us that no part of our legal advisories would involve the usage of AI,” the lawyer said, speaking on the condition of anonymity. Mint has learnt that Google and Amazon, too, have sought disclosures from their lawyers in India about their internal AI usage. 


Queries emailed to Meta, Google and Amazon did not elicit responses. 

Other law firms that have not yet adopted AI-related disclosures are expected to do so in the next four to five months. "Certain clients now expect that service providers declare the level of use of AI in performing the services as they prefer transparency," said Adil Ladha, partner at Saraf and Partners.

No blanket ban 

However, law firms have not placed a blanket ban on AI. They will continue to use it for tasks like crunching data and preparing case synopses. 

Cyril Amarchand Mangaldas deploys a mix of AI tools—some aimed at administrative efficiency (like Copilot) and others tailored for legal work (Casemine and Relativity). Chief innovation officer Komal Gupta told Mint the law firm is also “experimenting” with next-generation GenAI tools like Harvey and Lucio.  

Harvey AI and Lucio, a homegrown Indian startup, offer AI models trained specifically on Indian court cases and landmark legal verdicts. The tools are pitched as helping law firms build custom AI assistants for legal research, briefs, taxation issues and more.

Rahul Rai, cofounder and partner at competition law specialist Axiom5, is building a custom AI assistant for the firm's associates with Lucio.

“As a firm, we are actively at the forefront of adopting AI for various tasks to make operations more efficient,” he said. “In various cases, using AI tools efficiently can drastically reduce the time we take to create a first draft of a legal representation. If the prompts are given with the right parameters, most AI tools can offer the right results—saving valuable time for associates in tasks such as due diligence.” 


New York City-based privacy rights advocate Mishi Choudhary, who runs the Software Freedom Law Centre in India, also permits AI usage outside core legal work.

“We don’t allow the usage of AI in writing briefs. They (lawyers) can use tools like Grammarly or research tools that have AI features, but no advisories or briefs are allowed to be generated through commonly known AI tools,” she said. “The language of drafting can be refined, but no research in law is allowed based on these tools.” 

However, Choudhary finds common AI tools “very useful” to “study copyright licensing and policy issues”, review and summarize documents, and for automation of tasks and online discovery. 

AI scepticism  

Still, the adoption of AI remains a contentious topic in legal circles.

Justice Bhushan Ramkrishna Gavai, who was recently named the next chief justice of India, said at a conference in Kenya that algorithms can undermine the very basis of judicial systems. "The essence of justice often involves ethical considerations, empathy, and contextual understanding—elements that remain beyond the reach of algorithms."

Justice Gavai flagged the lack of “human-level discernment” in AI, which can lead to adverse legal consequences. 

Even smaller law firms focusing on specialized areas have turned cautious.


“Outputs generated by AI must be independently verified to avoid reliance on inaccurate or incomplete information…” said Krishnava Dutt, managing partner at Argus Partners. “We are also developing an annual training module that will provide comprehensive guidance on the effective and ethical use of various AI technologies across the firm.”  
