LLM Enhancement with Domain Expert Mental Model to Reduce LLM Hallucination with Causal Prompt Engineering

Source: arXiv AI Papers

Difficult decision-making problems exist across various fields, and the integration of large language models (LLMs) aims to streamline this process. However, LLMs are hampered by issues such as missing training data, which can lead to hallucinations and inaccuracies. Retrieval-Augmented Generation (RAG) attempts to mitigate these problems by incorporating external information retrieval, thereby enhancing the reliability of LLM outputs. While RAG improves accuracy, it is not a complete solution due to potential limitations in accessing all necessary information.

To advance decision-making further, the paper proposes a technology based on optimized human-machine dialogue that seeks to create a computationally tractable personal expert mental model (EMM). The EMM algorithm for LLM prompt engineering includes four crucial steps: identifying the relevant factors, structuring those factors hierarchically, generating a generalized model specification, and finally forming a detailed expert mental model. As tasks often involve critical information gaps, the proposed method aims to address these challenges while facilitating more nuanced decision-making frameworks that leverage the strengths of both LLMs and expert insights.
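The four steps above could be sketched as a simple prompt-assembly pipeline. The following is a minimal illustration, not the paper's actual algorithm: the data structures, field names, and prompt wording are all assumptions, standing in for whatever representation the authors use at each stage.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four-step EMM prompt-engineering pipeline.
# Step mapping (all representations are assumed, not from the paper):
#   1. factor identification        -> `factors`
#   2. hierarchical structuring     -> `hierarchy` (factor -> sub-factors)
#   3. generalized model spec       -> `spec`
#   4. detailed expert mental model -> `detail`


@dataclass
class ExpertMentalModel:
    factors: list[str] = field(default_factory=list)
    hierarchy: dict[str, list[str]] = field(default_factory=dict)
    spec: str = ""
    detail: str = ""


def build_emm_prompt(question: str, emm: ExpertMentalModel) -> str:
    """Assemble a causal prompt that grounds the LLM in the expert model,
    instructing it to flag missing information rather than hallucinate."""
    lines = [f"Decision question: {question}"]
    lines.append("Relevant factors: " + ", ".join(emm.factors))
    for parent, children in emm.hierarchy.items():
        lines.append(f"- {parent} depends on: {', '.join(children)}")
    lines.append(f"Generalized model specification: {emm.spec}")
    lines.append(f"Expert detail: {emm.detail}")
    lines.append(
        "Answer using only the factors and causal structure above; "
        "reply 'unknown' where required information is missing."
    )
    return "\n".join(lines)


# Example with made-up decision factors:
emm = ExpertMentalModel(
    factors=["budget", "timeline", "vendor risk"],
    hierarchy={
        "project success": ["budget", "timeline"],
        "timeline": ["vendor risk"],
    },
    spec="project success <- budget, timeline; timeline <- vendor risk",
    detail="Vendor delays have historically extended timelines significantly.",
)
prompt = build_emm_prompt("Should we switch vendors mid-project?", emm)
print(prompt)
```

The key design point the paper argues for is visible even in this toy version: the prompt constrains the model to the expert's causal structure, so gaps surface as explicit "unknown" answers instead of fabricated content.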

👉 Read the original: arXiv AI Papers