Deep Learning for approximating solutions to hard computational problems indirectly
Sun 20.04 12:30 - 13:00
- Graduate Student Seminar
Bloomfield 424
Abstract: Computational reduction is a powerful tool in optimization: it translates one computational problem into another while preserving key structural and computational properties, allowing a problem to be reformulated in terms of another and often making otherwise complex problems tractable. Since reductions are mostly devised by hand, it is plausible that alternative, better reductions exist; moreover, for many pairs of problems no reduction has been published, and discovering one may yield new insights and solution strategies. Typically, computational problems are either solved with tailored algorithms or reduced to constraint satisfaction problems (CSP), SAT, or SMT, leveraging existing solvers. The manual, expert-driven nature of these reductions raises the question of whether an automated, general framework for finding reductions could be developed. The emergence of Neural Combinatorial Optimization (NCO), which applies neural networks to combinatorial problems, offers a promising avenue for automating the reduction process. This work focuses on imperfect reductions: reductions without correctness guarantees but with empirical evidence of their value. It limits its scope to reductions between optimization problems, since such reductions can serve as components of algorithmic frameworks that are evaluated by their performance. Through experimentation, we find that learning neural reductions is challenging, often leading to unstable training and inconsistent performance. Despite these difficulties, our approach successfully recovers known natural reductions, suggesting that further refinement could improve the reliability of neural reduction discovery.
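
As a concrete illustration of a "natural" reduction between optimization problems (an assumed textbook example, not taken from the talk itself), consider the classic reduction from Maximum Independent Set to Maximum Clique: an independent set in a graph G is exactly a clique in the complement of G. The minimal sketch below, in plain Python with no external dependencies, shows this reduction end to end, using a brute-force clique solver as a stand-in for an off-the-shelf solver.

```python
from itertools import combinations

def complement(n, edges):
    """Return the edge list of the complement of an n-vertex graph."""
    present = set(map(frozenset, edges))
    return [(u, v) for u, v in combinations(range(n), 2)
            if frozenset((u, v)) not in present]

def max_clique_bruteforce(n, edges):
    """Exponential-time clique solver, standing in for an existing solver."""
    adj = set(map(frozenset, edges))
    for size in range(n, 0, -1):  # try largest candidate sets first
        for subset in combinations(range(n), size):
            if all(frozenset(p) in adj for p in combinations(subset, 2)):
                return list(subset)
    return []

def max_independent_set(n, edges):
    """Solve MIS by reducing it to Max Clique on the complement graph."""
    return max_clique_bruteforce(n, complement(n, edges))

# A 5-cycle: its maximum independent set has size 2, e.g. {0, 2}.
print(max_independent_set(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```

Note that this reduction is exact and preserves the objective value; the imperfect reductions studied in the talk drop such guarantees in exchange for being learnable.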
