
My comment got eaten by HN, but I think LLMs should be used as the glue between logic systems like Prolog, with inductive, deductive, and abductive reasoning handed off to a tool. LLMs are great at pattern matching, but forcing them to reason seems like an out-of-envelope use.

Prolog is how I would solve puzzles like that as well. Faulting the approach is like calling someone weak for using a spreadsheet or a calculator.

Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations https://arxiv.org/abs/2305.14618
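To make the "glue" idea concrete, here is a minimal sketch in Python: the LLM's only job is to translate natural language into facts and rules, while a deterministic forward-chaining engine (a stand-in for a real Prolog system) does the deduction. The hardcoded facts and rules below are hypothetical stand-ins for what an extraction step might emit.

```python
# Minimal forward-chaining deduction, standing in for a Prolog backend.
# Facts are tuples; rules are (premises, conclusion) pairs.

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived (fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical output of an LLM "fact extraction" step for a puzzle:
facts = [("parent", "surgeon", "boy")]
rules = [
    ([("parent", "surgeon", "boy")], ("cannot_operate", "surgeon", "boy")),
]

derived = forward_chain(facts, rules)
print(("cannot_operate", "surgeon", "boy") in derived)  # True
```

The point of the split is that once the facts are extracted, the deduction is exact and auditable; the LLM never has to "reason" at inference time.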



I actually, coincidentally, tried this yesterday on variants of the "surgeon can't operate on the boy" puzzle. It didn't help; LLMs still can't reliably solve it.

(All current commercial LLMs are badly overfit on this puzzle, so if you change parts of it they get stuck and try to give the original answer in ways that don't make sense.)


What do you mean by you tried it?


I generated some Prolog programs, looked at them, and they were wrong.

Specifically, the model usually decides it already knows the answer (and gets it wrong), then optimizes out the part of the program that actually does anything.


I've been saying this ever since GPT 3 came out and I started toying with it.

It's unfortunate that, of all the people who work in AI, most barely even know what Prolog is.


It seems quite logical to me as well. An LLM is not a logical computing system, but it does have the knowledge of how to do multiplication.
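That distinction, knowing that a multiplication is needed versus actually computing it, is exactly what tool calling captures. A minimal sketch: the dict below mimics a tool call a model might emit (the format and tool name are assumptions for illustration), and ordinary code performs the arithmetic.

```python
# Dispatch a model-emitted "tool call" to real code instead of letting
# the model compute the answer token by token.

def dispatch(tool_call):
    tools = {"multiply": lambda a, b: a * b}
    return tools[tool_call["tool"]](*tool_call["args"])

# Hypothetical model output: it recognizes that multiplication is needed,
# but the exact arithmetic happens deterministically here.
call = {"tool": "multiply", "args": [1234, 5678]}
print(dispatch(call))  # 7006652
```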



