Hierarchical Conformal Classification

Source: arXiv AI Papers

Conformal prediction (CP) is a distribution-free framework that provides uncertainty quantification for machine learning classifiers by generating prediction sets guaranteed to contain the true label with a user-specified probability. However, traditional CP methods treat class labels as flat and unstructured, ignoring valuable domain knowledge such as semantic relationships or hierarchical class structures.

The paper introduces Hierarchical Conformal Classification (HCC), which integrates hierarchical class structure into the conformal prediction framework, allowing prediction sets to include nodes at different levels of the hierarchy while preserving coverage guarantees. HCC is formulated as a constrained optimization problem, and the authors show that a reduced, well-structured subset of candidate solutions suffices to ensure both coverage and optimality, addressing the combinatorial complexity of the problem.

Empirical evaluations on new benchmarks spanning audio, image, and text data show that HCC yields more informative and semantically meaningful prediction sets than flat CP methods. A user study further finds that annotators significantly prefer hierarchical prediction sets, suggesting improved interpretability and usability.

Integrating hierarchical information into CP can improve decision-making in applications where class relationships matter, mitigating the risks of ignoring class structure, such as less informative predictions or reduced user trust. Future work could explore further optimization techniques or applications in other domains with complex label hierarchies. Overall, HCC advances uncertainty quantification by combining rigorous statistical guarantees with structured domain knowledge.
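The paper formulates HCC as a constrained optimization over hierarchy nodes; the snippet below is only a minimal Python sketch of the surrounding machinery, not the authors' algorithm. It shows split conformal calibration for flat prediction sets, plus an illustrative rule that coarsens a complete group of sibling leaves into their parent node. The function names (`conformal_threshold`, `hierarchical_prediction_set`), the toy `leaves_of` hierarchy, and the greedy coarsening rule are all assumptions made for illustration.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal calibration: return a score threshold.

    Nonconformity score = 1 - softmax probability of the true class.
    The finite-sample-corrected (1 - alpha) quantile of calibration
    scores yields prediction sets that contain the true label with
    probability >= 1 - alpha (marginal coverage, under exchangeability).
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def flat_prediction_set(probs, threshold):
    """Flat CP: keep every leaf class whose score is below the threshold."""
    return np.where(1.0 - probs <= threshold)[0].tolist()

def hierarchical_prediction_set(probs, threshold, leaves_of):
    """Illustrative hierarchical set (a sketch, not the paper's solver):
    start from the flat conformal set of leaves, then replace any complete
    group of sibling leaves by their parent node. The represented leaves
    never shrink, so the flat set's coverage is preserved while the
    description becomes shorter and more interpretable."""
    flat = set(np.where(1.0 - probs <= threshold)[0])
    out = set(flat)
    for parent, leaves in leaves_of.items():
        if set(leaves) <= flat:
            out -= set(leaves)
            out.add(parent)
    return sorted(out, key=str)

if __name__ == "__main__":
    # Toy usage with random scores and a hypothetical two-branch hierarchy.
    rng = np.random.default_rng(0)
    K = 6
    cal_probs = rng.dirichlet(np.ones(K), size=500)
    cal_labels = rng.integers(0, K, size=500)
    tau = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
    leaves_of = {"mammal": [0, 1, 2], "bird": [3, 4, 5]}  # assumed hierarchy
    test_probs = rng.dirichlet(np.ones(K))
    print(flat_prediction_set(test_probs, tau))
    print(hierarchical_prediction_set(test_probs, tau, leaves_of))
```

Because the hierarchical set only ever replaces leaves by an ancestor that represents them, this sketch keeps the flat set's coverage guarantee; the paper's constrained optimization instead searches the structured candidate space directly to trade off set size and informativeness.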

👉 Read the original: arXiv AI Papers