
ISSN: 2755-6190 | Open Access

Open Access Journal of Artificial Intelligence and Technology

Volume: 1, Issue: 2

Causal Guard: A Smart System for Detecting and Preventing False Information in Large Language Models

Piyush Kumar Patel

ABSTRACT
While large language models have transformed how we interact with AI systems, they have a critical weakness: they confidently state false information that sounds entirely plausible. This "hallucination" problem has become a major barrier to using these models where accuracy matters most. Existing solutions either require retraining the entire model, add significant computational costs, or miss the root causes of why these hallucinations occur in the first place. We present Causal Guard, a new approach that combines causal reasoning with symbolic logic to catch and prevent hallucinations as they happen. Unlike previous methods that only check outputs after generation, our system understands the causal chain that leads to false statements and intervenes early in the process.

Causal Guard works through two complementary paths: one that traces causal relationships between what the model knows and what it generates, and another that checks logical consistency using automated reasoning. Testing across twelve different benchmarks, we found that Causal Guard correctly identifies hallucinations 89.3% of the time while missing only 8.3% of actual hallucinations. More importantly, it reduces false claims by nearly 80% while keeping responses natural and helpful. The system performs especially well on complex reasoning tasks where multiple steps of logic are required. Because Causal Guard shows its reasoning process, it works well in sensitive areas like medical diagnosis or financial analysis where understanding why a decision was made matters as much as the decision itself.
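As a loose illustration of the dual-path design the abstract describes, the Python sketch below grounds a claim against a small fact store (a stand-in for the causal path) and checks it for direct contradictions (a stand-in for the symbolic path). Every name, threshold, and data structure here (Claim, grounding_score, contradicts, screen, FACTS) is an illustrative assumption, not Causal Guard's actual interface.

```python
# Toy sketch of a two-path hallucination screen. The real system traces
# causal relationships and runs automated reasoning; this stand-in uses a
# lookup table and a polarity check purely to show how the two verdicts
# could be combined into a single flag plus a readable reasoning trace.

from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    subject: str
    predicate: str
    asserted: bool  # True = the claim affirms the predicate, False = denies it

# Hypothetical fact store: (subject, predicate) -> truth value.
FACTS = {
    ("aspirin", "thins blood"): True,
    ("paris", "capital of france"): True,
}

def grounding_score(claim: Claim) -> float:
    """Causal-path stand-in: is the claim's content traceable to known facts?"""
    return 1.0 if (claim.subject, claim.predicate) in FACTS else 0.0

def contradicts(claim: Claim) -> bool:
    """Symbolic-path stand-in: does the claim negate a recorded fact?"""
    known = FACTS.get((claim.subject, claim.predicate))
    return known is not None and known != claim.asserted

def screen(claim: Claim, threshold: float = 0.5) -> tuple[bool, str]:
    """Combine both paths; return (is_hallucination, reasoning trace)."""
    score = grounding_score(claim)
    inconsistent = contradicts(claim)
    flagged = inconsistent or score < threshold
    return flagged, f"grounding={score:.1f}, contradiction={inconsistent}"

print(screen(Claim("aspirin", "thins blood", asserted=False)))   # caught by the symbolic path
print(screen(Claim("mars", "capital of france", asserted=True))) # ungrounded: caught by the causal path
print(screen(Claim("paris", "capital of france", asserted=True)))# passes both paths
```

In the full system, the grounding score would presumably come from tracing causal chains through the model's generation process rather than a lookup table, and the contradiction check from an automated reasoner rather than a polarity comparison; the returned trace mirrors the abstract's point that the system shows its reasoning.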
