Conference Paper

Learning While Searching in Constraint-Satisfaction-Problems.

Conference: Proceedings of the 5th National Conference on Artificial Intelligence. Philadelphia, PA, August 11-15, 1986. Volume 1: Science.
Source: DBLP

ABSTRACT

The popular use of backtracking as a control strategy for theorem proving in PROLOG and in Truth-Maintenance-Systems (TMS) led to increased interest in various schemes for enhancing the efficiency of backtrack search. Researchers have referred to these enhancement schemes by the names "Intelligent Backtracking" (in PROLOG), "Dependency-directed backtracking" (in TMS), and others. Those improvements center on the issue of "jumping back" to the source of the problem when a dead-end situation is encountered. This paper examines another, much less explored, issue which arises at dead-ends. Specifically, we concentrate on the idea of constraint recording, namely, analyzing and storing the reasons for the dead-ends, and using them to guide future decisions so that the same conflicts will not arise again. We view constraint recording as a process of learning, and examine several possible learning schemes, studying the tradeoffs between the amount of learning and the improvement in search efficiency.
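The constraint-recording idea described above can be illustrated as a backtracking solver that stores each dead-end partial assignment as a nogood and rejects any future extension containing one. This is a minimal sketch, not the paper's exact schemes: the variable names are invented, and for simplicity it records the full conflicting assignment, whereas the deeper learning schemes studied in the paper store only minimal conflict sets.

```python
def solve(variables, domains, constraints):
    """Backtracking search with naive constraint recording (nogood learning).

    `constraints` is a list of predicates over partial assignments; each
    predicate must return True whenever any of its variables is unassigned.
    """
    nogoods = set()  # frozensets of (var, value) pairs known to conflict

    def consistent(assignment, var, value):
        trial = dict(assignment)
        trial[var] = value
        # Prune any extension that contains a previously recorded nogood.
        for ng in nogoods:
            if all(trial.get(v) == val for v, val in ng):
                return False
        return all(check(trial) for check in constraints)

    def backtrack(assignment):
        if len(assignment) == len(variables):
            return dict(assignment)
        var = variables[len(assignment)]
        for value in domains[var]:
            if consistent(assignment, var, value):
                assignment[var] = value
                result = backtrack(assignment)
                if result is not None:
                    return result
                del assignment[var]
        # Dead end: record the current partial assignment so the same
        # conflict is recognized (and avoided) later in the search.
        if assignment:
            nogoods.add(frozenset(assignment.items()))
        return None

    return backtrack({})
```

For example, a three-variable "not-equal" chain can be solved with constraints written so that they only fire once both of their variables are assigned, e.g. `lambda a: 'X' not in a or 'Y' not in a or a['X'] != a['Y']`.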


Available from: Rina Dechter, May 21, 2015
    • "This leads to an effective control of combinatorial search which is called dependency-directed backtracking. This method is used in truth maintenance systems [16,48], and has been improved or simplified in the context of logic programming and constraint satisfaction problems by various researchers ([10, 14, 15]). As the learnt clauses database is of exponential size, several strategies have been proposed to reduce such spatial complexity. "
    ABSTRACT: Learning is a general concept, playing an important role in many Artificial intelligence domains. In this paper, we address the learning paradigm used to explain failures or conflicts encountered during search. This explanation, derived by conflict analysis, and generally expressed as a new constraint, is usually used to dynamically avoid future occurrences of similar situations. Before focusing on clause learning in Boolean satisfiability (SAT), we first overview some important works on this powerful reasoning tool in other domains such as constraint satisfaction and truth maintenance systems. Then, we present a comprehensive survey of the most important works having led to what is called today—conflict driven clause learning—which is one of the key components of modern SAT solvers. We also overview some of the original extensions or variants of clause learning. In theory, current SAT solvers with clause learning are as powerful as general resolution proof systems. In practice, real-world SAT instances with millions of variables and clauses are now in the scope of this solving paradigm.
    Article · Oct 2015 · Annals of Operations Research
    • "Roughly speaking, the former reduces the amount of nodes visited during the traversal while the latter makes the traversals less frequent. Although nogood recording [11] is an old technique for pruning the search space during search, it has not been applied much to consistency algorithms. "
    ABSTRACT: Binary decision diagrams (BDDs) can compactly represent ad-hoc n-ary Boolean constraints. However, there is no generalized arc consistency (GAC) algorithm which exploits BDDs. For example, the global case constraint by SICStus Prolog for ad-hoc constraints is designed for non-Boolean domains. In this paper, we introduce a new GAC algorithm, bddc, for BDD constraints. Our empirical results demonstrate the advantages of a new BDD-based global constraint: bddc is more efficient both in terms of memory and time than the case constraint when dealing with ad-hoc Boolean constraints. This becomes important as the size of the ad-hoc constraints becomes large.
    Full-text · Conference Paper · Jan 2006
    • "As a second example, we have studied the computational complexity of size and relevance bounded nogood learning. A related, yet different problem, the number of minimal nogoods, is addressed in (Dechter 1986). "
    ABSTRACT: We study the computational complexity of reasoning with global constraints. We show that reasoning with such constraints is intractable in general. We then demonstrate how the same tools of computational complexity can be used in the design and analysis of specific global constraints. In particular, we illustrate how computational complexity can be used to determine when a lesser level of local consistency should be enforced, when decomposing constraints will lose pruning, and when combining constraints is tractable. We also show how the same tools can be used to study symmetry breaking, meta-constraints like the cardinality constraint, and learning nogoods.
    Full-text · Article · Jun 2004