Quantified formulas are difficult for Satisfiability Modulo Theories (SMT) solvers because their satisfiability is in general undecidable, which makes efficient instantiation techniques crucial. Established techniques such as e-matching and syntax-guided, model-based, conflict-based, and enumerative instantiation each have strengths and often complement one another. The paper proposes an approach that dynamically learns from the instantiations these techniques produce during the solving process. By treating observed instantiations as samples from a latent language, the method fits probabilistic context-free grammars and uses them to generate new terms that resemble previously successful instantiations. This probabilistic model lets the solver mimic instantiations that have worked before, improving efficiency. The learned term probabilities can also be inverted to encourage diverse instantiations and keep the solver from getting stuck repeatedly generating similar, unproductive terms. Balancing exploitation of known good instantiations with exploration of new terms aims to improve quantifier reasoning performance. This dynamic learn-and-generate framework could lead to more robust and adaptable SMT solving strategies, and its ability to integrate multiple instantiation techniques and learn from them is a notable step forward in handling quantified formulas. Future work may refine the approach further and validate it empirically.
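To make the learn-then-sample idea concrete, here is a minimal sketch (not the paper's implementation): a single-nonterminal probabilistic grammar over function symbols whose weights are estimated from previously observed instantiation terms, with an inverted distribution used for exploration. The signature (f, g, x, c), the tuple encoding of terms, the depth bound, and the 1/p inversion rule are all illustrative assumptions.

```python
import random
from collections import Counter

# Terms are nested tuples: ('f', ('x',), ('g', ('c',))) stands for f(x, g(c)).
Term = tuple

# Assumed signature for this sketch: symbol -> arity.
ARITY = {'f': 2, 'g': 1, 'x': 0, 'c': 0}

def collect_symbols(term: Term, counts: Counter) -> None:
    """Count every function/constant symbol occurring in a term."""
    head, *args = term
    counts[head] += 1
    for arg in args:
        collect_symbols(arg, counts)

def learn_weights(observed: list[Term]) -> dict[str, float]:
    """Estimate production probabilities from previously seen instantiations."""
    counts = Counter()
    for t in observed:
        collect_symbols(t, counts)
    total = sum(counts.values())
    return {sym: c / total for sym, c in counts.items()}

def invert(weights: dict[str, float]) -> dict[str, float]:
    """Favor rarely seen symbols to diversify the generated terms (assumed rule)."""
    inv = {sym: 1.0 / p for sym, p in weights.items()}
    z = sum(inv.values())
    return {sym: w / z for sym, w in inv.items()}

def sample_term(weights: dict[str, float], depth: int = 3) -> Term:
    """Sample a term top-down; force nullary symbols once the depth bound is hit."""
    candidates = [s for s in weights if depth > 0 or ARITY[s] == 0]
    probs = [weights[s] for s in candidates]
    head = random.choices(candidates, probs)[0]
    return (head, *(sample_term(weights, depth - 1) for _ in range(ARITY[head])))

# Instantiation terms gathered from earlier rounds of solving (toy data).
observed = [('f', ('x',), ('c',)), ('g', ('x',)), ('f', ('x',), ('g', ('c',)))]
w = learn_weights(observed)
print(sample_term(w))          # exploit: mimic frequently seen symbols
print(sample_term(invert(w)))  # explore: prefer rarely seen symbols
```

In an actual solver, candidates produced this way would still be filtered by sorts and relevance before being used as instantiations; the sketch only illustrates the exploitation/exploration split over learned term distributions.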
👉 Read the original: arXiv AI Papers