Not published just yet are experiments on finding solutions to mathematical problems traditionally tackled with SAT solvers, at a much larger scale than was previously possible.
Um, okay? Isn't that how most optimisation involving AI is supposed to go?
Perhaps some much-needed context. Mathematicians are not stupid; we are very much aware of the existing families of genetic algorithms, the cross-entropy method, and so on. Nothing works on these problems, or at least not well at scale. As I said, the state of the art for many of them was SAT-based. The problem is that the heuristic used for exploring new solutions always required very careful design, as the naive ones rarely worked well.
Here, the transformer is proving effective at searching for good heuristics, far more so than any other existing technique. In this sense, it is achieving far, far better performance in optimisation than previous approaches. That is a breakthrough, at least for us mathematicians. If that doesn't constitute an improvement in optimisation, I don't know what does.
Saying it's "just a meta-heuristic relying on local search" is akin to saying these tasks are "just optimisation". If it's so procedural, why weren't we gaining ground on these problems before?
Also, by the way, a :facepalm: is not exactly the pinnacle of academic rebuttal, no matter how wrong I might have been.
It’s just that the cited paper is no different from any other paper in the meta-heuristic community.
Some idea for guiding the local search. Some limited sample results. No promises on bounds or generalizability of the method.
If this is groundbreaking, then every legitimate meta-heuristic paper in the past 50 years was also groundbreaking.
I will change my mind if I see a wide set of benchmark results where it consistently beats, or is even head-to-head with, the SoTA. Then we would know we have a game changer.
So for reference, I am a mathematician; I work in probability theory. I don't work in optimisation theory, but a few days ago I saw a talk by a prominent member of that field ranting about the obsession with pointless bounds when industry does not care, the Gurobi team does not care, and in OR, NN-centric heuristics coupled with massive GPU compute are eating academia's lunch. But I digress.
The key difference is that this one approach (and the improvements that follow it) seems to be generating counterexamples and sharper estimates so rapidly that they simply can't publish them fast enough. It almost seems pointless to do so given how easy they now are to find. Mathematicians are not stupid; we know about meta-heuristics. It takes forever to get a single result with existing techniques, so long that each result is often its own paper. Getting everything in the PatternBoost paper took a few months, as I understand it. Local search is problem-specific, but the point is that the NN does a lot of the heavy lifting, so the choice of local search is not as critical.
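The loop described above can be sketched roughly as follows. This is a hypothetical skeleton, not the authors' code: in the real method the `generate` step is a transformer trained on the current elite set, which I replace here with a toy bit-flip proposal on a toy maximise-the-ones objective; function names (`pattern_boost_sketch`, `generate`, `local_search`) are my own.

```python
import random

def pattern_boost_sketch(objective, generate, local_search, init_pop, rounds=3, keep=4):
    """Hypothetical skeleton of the loop: keep an elite set, generate new
    candidates from it (the real method trains a transformer on the elite;
    here `generate` is any proposal function), polish each candidate with
    problem-specific local search, and repeat."""
    pop = [local_search(s) for s in init_pop]
    for _ in range(rounds):
        elite = sorted(pop, key=objective, reverse=True)[:keep]
        pop = elite + [local_search(generate(random.choice(elite)))
                       for _ in range(len(init_pop))]
    return max(pop, key=objective)

# Toy instance: maximise the number of 1s in a bitstring.
def objective(s):
    return sum(s)

def generate(s):
    # Stand-in "model": copy an elite string and flip one random bit.
    s = list(s)
    i = random.randrange(len(s))
    s[i] ^= 1
    return s

def local_search(s):
    # One greedy pass of single-bit flips, keeping any flip that improves.
    s = list(s)
    for i in range(len(s)):
        t = s[:]
        t[i] ^= 1
        if objective(t) > objective(s):
            s = t
    return s

best = pattern_boost_sketch(objective, generate, local_search,
                            [[0] * 8 for _ in range(6)])
# On this toy problem the local search alone reaches the all-ones string.
```

The point the sketch makes is structural: the local search is deliberately generic, and all the problem knowledge the real method accumulates lives in the learned generator.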
Here is a frivolous example mentioned in the paper itself: the largest subset of a 2D lattice with no embedded isosceles triangle. For larger lattices (I don't remember the exact size under consideration), the SoTA is SAT solvers, which work up to n=32. Previous meta-heuristics cannot even do this; of course, you can try if you don't believe me. PatternBoost has just recently gotten to ~96% of the expected size for n=64 and is still improving by the day. Once it reaches 100%, there are techniques to show local optimality. Does this work as a benchmark? There are plenty more in the paper, and more unpublished.
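To make the problem concrete, here is a small sketch under my reading of the condition: no three chosen points may have two equal pairwise distances (collinear triples included, so equilateral and degenerate cases are both excluded) — the paper may use a slightly different convention. `greedy_subset` is a naive baseline of my own for illustration, nowhere near what SAT solvers or PatternBoost achieve.

```python
from itertools import combinations

def is_isosceles(p, q, r):
    # Compare squared pairwise distances; two equal values means the
    # triple is isosceles (equilateral and collinear cases included).
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    ds = sorted([d2(p, q), d2(p, r), d2(q, r)])
    return ds[0] == ds[1] or ds[1] == ds[2]

def has_isosceles(points):
    # True if any triple of points violates the constraint.
    return any(is_isosceles(p, q, r) for p, q, r in combinations(points, 3))

def greedy_subset(n):
    # Naive greedy baseline: scan the n x n grid in row order and keep a
    # point whenever it creates no isosceles triple with the points so far.
    chosen = []
    for x in range(n):
        for y in range(n):
            cand = (x, y)
            if not any(is_isosceles(cand, a, b)
                       for a, b in combinations(chosen, 2)):
                chosen.append(cand)
    return chosen
```

For example, `has_isosceles([(0, 0), (1, 0), (0, 1)])` is `True` (two sides of length 1), while `(0, 0), (1, 0), (3, 0)` pass the check. The gap between a baseline like this and the known optimal sizes is exactly what makes the problem a good benchmark.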
For more serious cases that are difficult to explain here, the group in Sydney has counterexamples to hundreds of graph-theory and representation-theory conjectures that have stood for many decades. I also disagree on the "no different" point; Geordie Williamson is a very strong mathematician and does not tend to jump on trivial things. He is very receptive to discussion on these matters, so you can ask him yourself how this is actually a game-changer downstream.
Yes, it is a meta-heuristic. But almost all meta-heuristics have been useless so far for these problems. This one is not, and for the people downstream, that's really all that matters.
> No evidence so far that "AI" has improved our general optimization capabilities. At all.
Uh, ok, I didn't claim that. At All.
Deep-learned features have definitely helped us (meaning my company) over hand-engineered features, allowing us to navigate our problem space significantly faster.
The traditional usual suspects still sit at the top of the integer-optimization benchmarks by a huge margin. Same in constraint programming and SAT.