NextFin

Human-Machine Oversight Key to Tackling AI Hallucinations and Decision Black Boxes, Guotai Junan Securities Expert Says

Summarized by NextFin AI
  • Human-in-the-loop mechanisms are crucial for addressing AI hallucinations and opaque decision-making, as stated by Zhan Tingting from Guotai Junan Securities.
  • The 2025 ITValue Summit highlighted the top ten challenges enterprises face in AI deployment, emphasizing the need for robust governance.
  • Zhan proposed a four-pronged approach to tackle AI issues, including modular AI frameworks, data reliability, comprehensive safeguards, and real-time monitoring.
  • She concluded that combining automated AI processes with human supervision is essential for responsible AI deployment.

AsianFin — Human-in-the-loop mechanisms are essential for overcoming AI hallucinations and opaque decision-making, according to Zhan Tingting, assistant general manager of Technology R&D at Guotai Junan Securities.

Her remarks came during the 2025 ITValue Summit, which took place in Sanya and was co-hosted by TMTPost Group and ITValue under the theme “Truth in AI Implementation.”

The summit focused on the top ten challenges enterprises face when deploying AI in real-world scenarios. Zhan outlined a four-pronged approach her team has successfully used to address hallucinations and decision black boxes in long-term AI practice.

  • First, maintain controllable atomic AI capabilities through a modular “1+n” large model framework.
  • Second, ensure data-level reliability by building both large and small models to guarantee trustworthy outputs.
  • Third, apply comprehensive safeguards across computing power, algorithms, data, platforms, and applications to strengthen AI deployment and management systems.
  • Finally, monitor AI operations in real time, with anomaly detection, automated intervention, and human-in-the-loop auditing to ensure effective oversight and control.

Zhan concluded that combining automated AI processes with human supervision is critical for enterprises seeking to deploy intelligent systems responsibly, marking a growing recognition that AI’s transformative potential requires robust governance to be safely realized.


