A recent paper, The Reversal Curse, points out an apparent failure in large language models like GPT-4. From the abstract: We expose a surprising failure of generalization in auto-regressive large language models (LLMs). If a model is trained on a sentence of the form “A is B”, it will not automatically generalize to the reverse direction…
Research suggests that generative AI is weak at deductive logic and suffers from a Reversal Curse. I take a close look and show how prompting can help deal with the malady.