NextFin News - The highest echelons of the Indian judiciary have drawn a firm line regarding the integration of generative artificial intelligence into the nation’s courtrooms. Speaking at a high-level judicial conference this weekend, Chief Justice of India Surya Kant declared that while artificial intelligence serves as a potent catalyst for efficiency, it must never be permitted to "encroach" upon the core human responsibility of delivering judgments. The Chief Justice’s remarks, delivered alongside Karnataka High Court Chief Justice Vibhu Bakhru, signal a cautious pivot in India’s legal modernization strategy: one that prioritizes the "human-in-the-loop" principle over the allure of fully automated adjudication.
The timing of this intervention is critical. India’s judicial system is currently grappling with a backlog of over 50 million cases, a figure that has historically made the promise of AI-driven speed almost irresistible. However, Kant argued that the "soul of justice" resides in empathy and contextual understanding, traits that remain fundamentally beyond the reach of algorithmic logic. According to Rediff, the Chief Justice emphasized that AI should be viewed as a tool to strengthen the judiciary’s administrative and research capabilities rather than a replacement for the nuanced deliberation required in complex legal disputes. This stance reflects a growing global anxiety among jurists that "black box" algorithms could inadvertently bake bias into the legal system or erode the transparency of the "reasoned order," a cornerstone of common law.
Chief Justice Vibhu Bakhru of the Karnataka High Court echoed these concerns, specifically questioning the long-term trajectory of AI’s role in the courtroom. Bakhru raised the specter of a "creeping dependency" in which the convenience of AI-generated summaries or draft opinions might eventually dull the critical faculties of the bench. The Karnataka High Court has been at the forefront of India’s digital transformation, yet Bakhru’s skepticism highlights a shift from the "tech-first" optimism of the early 2020s toward a more defensive posture. The debate is no longer about whether AI will be used (it already is, through tools like SUVAS for translation and SUPACE for data processing) but rather where the "kill switch" for automation should be located.
The institutional response to these challenges is already taking shape. Kant recently reconstituted the Supreme Court’s Artificial Intelligence Committee, appointing Justice Suraj Govindaraj of the Karnataka High Court to a key role. This committee is tasked with drafting the ethical guardrails that will govern AI deployment across the subordinate judiciary. The inclusion of Govindaraj is a strategic move; he is widely regarded as one of India’s most tech-savvy judges, having pioneered paperless courts in Bengaluru. His presence suggests that the Supreme Court is looking for a middle path—leveraging AI to analyze case delays and propose administrative solutions while strictly cordoning off the actual decision-making process from machine interference.
For the legal tech industry and the broader economy, this judicial skepticism creates a complex landscape. While there is a massive market for AI tools that can automate document review and legal research, the "no-go zone" for judgment drafting limits the total addressable market for more ambitious "robot judge" startups. Furthermore, the emphasis on human oversight means that the demand for skilled legal professionals will not diminish; instead, the skill set will shift toward "algorithmic literacy." Lawyers will increasingly be required to audit the AI tools they use, ensuring that the precedents cited by a machine are not "hallucinations"—a phenomenon that has already led to sanctions for attorneys in the United States.
The broader implication of the Kant-Bakhru dialogue is a reaffirmation of the judiciary as a bastion of human agency in an increasingly automated world. By insisting that AI remain a "support system," the Indian leadership is attempting to insulate the rule of law from the volatility of rapid technological shifts. This approach may slow the pace of case clearance in the short term, but it aims to preserve the legitimacy of the institution in the long run. As the Supreme Court’s AI Committee begins its new mandate, the focus will likely turn to "explainable AI"—systems that can not only provide an output but also detail the specific legal logic and precedents used to reach it, thereby allowing a human judge to verify every step of the process.