Assessing the Role of Human Judgment in Hybrid Auditing Environments Combining AI and Manual Analysis
Abstract
This research investigates the complex interplay between artificial intelligence systems and human
judgment in contemporary hybrid auditing environments. As organizations increasingly adopt AI-powered auditing tools while maintaining traditional manual analysis, understanding how these two approaches complement and potentially conflict with one another becomes crucial for audit quality and effectiveness. Our
study employs a novel methodological framework combining experimental simulations with qualitative
analysis of auditor decision-making processes across 15 financial institutions. We developed a unique
assessment protocol that measures judgment calibration, cognitive bias mitigation, and decision confidence in scenarios where AI recommendations either align with or contradict human intuition. The
findings reveal several counterintuitive patterns: human auditors demonstrated superior judgment in
detecting novel fraud patterns that fell outside AI training datasets, while AI systems excelled at identifying subtle statistical anomalies across large transaction volumes. However, the most significant finding
concerns the 'validation paradox': auditors showed decreased scrutiny of AI-generated findings when
they aligned with initial hypotheses, potentially creating new blind spots. Our research contributes to
the emerging literature on human-AI collaboration in professional settings by proposing a dynamic calibration model that optimizes the allocation of auditing tasks between human auditors and AI systems
based on problem characteristics, data quality, and risk assessment. This study addresses a critical
gap in understanding how professional judgment evolves in increasingly automated environments and
provides practical frameworks for organizations seeking to implement hybrid auditing systems without
compromising audit quality or professional skepticism.