How Self-Critique Improves Logic and Reasoning in LLMs Like ChatGPT

One of the most impactful prompting techniques you can use is self-critique in any form. In this lesson, we decouple it from the more familiar prompting strategies and zoom in on the technique itself.

Recent advances in large language models (LLMs) like GPT-3 have demonstrated their impressive capabilities. However, these models still make illogical errors and can benefit from self-critique: the ability to reflect on and improve their own outputs. Implementing effective self-critique in LLMs could make them more robust and trustworthy.

Integral Role in Advanced Prompt Engineering

The Self-Critique or Self-Reflection phase is not just a standalone feature but a foundational element in many advanced prompt engineering techniques.

Techniques such as "chaining," where answers are built upon sequentially to improve coherence; "tree-of-thought," which creates a structured, branching approach to thinking; and "Reflexion," where answers are revisited and refined multiple times, all leverage the self-reflection process.

By rigorously examining and re-evaluating its outputs, the LLM ensures that the answers generated are not just fast, but also precise and dependable. This introspective phase, therefore, forms a vital cornerstone for producing LLM output that users can trust and be confident in.

How Self-Critique Works

  • The LLM generates multiple ideas in response to a prompt
  • It reviews the ideas to identify potential flaws in reasoning
  • The model attempts to resolve those flaws and improve the response
  • This technique mimics human iterative thinking

Self-critique works by having the model critique its initial outputs before returning the final response. The LLM generates multiple ideas, reviews them to identify flaws, then attempts to resolve those flaws. This technique forces deeper analysis, surfaces mistaken assumptions, and allows the exploration of alternative perspectives. The multi-step approach mimics the human iterative thinking process.
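
The generate, critique, and resolve steps above can be sketched as a simple three-call pipeline. This is an illustrative sketch only: `ask` is a hypothetical placeholder for whatever chat-completion call you use, not a real API, and the prompt wording is an assumption.

```python
def self_critique(prompt, ask):
    """Run a generate -> critique -> resolve loop.

    `ask` is any callable that sends a prompt string to an LLM and
    returns its text reply (a hypothetical stand-in, not a real API).
    """
    # Step 1: generate an initial answer.
    draft = ask(f"Answer the following question:\n{prompt}")

    # Step 2: review the draft to surface flawed reasoning.
    critique = ask(
        "Review the answer below for invalid assumptions, ignored "
        f"constraints, or logical gaps.\n\nQuestion: {prompt}\nAnswer: {draft}"
    )

    # Step 3: resolve the identified flaws in a revised answer.
    revised = ask(
        "Rewrite the answer, fixing every issue raised in the critique.\n\n"
        f"Question: {prompt}\nAnswer: {draft}\nCritique: {critique}"
    )
    return revised
```

Because the model function is passed in, the same loop works with any backend, and each step stays a single-purpose prompt.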

Discreteness in the Self-Critique Process

For the Self-Critique or Self-Reflection phase to be most effective, it must be treated as a separate and distinct step in the LLM's processing. Integrating it within the same prompt that generates the initial response can dilute the quality and accuracy of both the answer and its critique.

By ensuring it stands alone, the LLM can focus exclusively on evaluating and refining its initial outputs, which leads to more precise and reliable final answers.
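
A minimal sketch of what "standing alone" means in practice: the critique arrives as its own conversational turn rather than being crammed into the generation prompt. The role/content message dicts follow a common chat-API convention and are an assumption here; no real API is called.

```python
question = "If all A are B and some B are C, are some A necessarily C?"

# Discouraged: generation and critique overloaded into one prompt.
combined = [
    {"role": "user", "content": question + " Also critique your own answer."}
]

# Preferred: two discrete turns, each with a single job.
generate_turn = [{"role": "user", "content": question}]
critique_turn = generate_turn + [
    # Placeholder for whatever the model actually replied.
    {"role": "assistant", "content": "<initial answer>"},
    {"role": "user",
     "content": "Re-examine your last reply step by step for flaws."},
]
```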

Benefits of Self-Critique

Enhances Logical Reasoning

  • Allows LLMs to critique assumptions and consider constraints
  • Reduces illogical conclusions

Self-critique enhances logical reasoning in LLMs. When given a prompt requiring deductive logic, the LLM can critique whether its initial responses make invalid assumptions or ignore key constraints. The critique stage points out these fallacies so the LLM can correct them during the resolve phase. This reduces illogical conclusions.

Improves Consistency

  • With explicit self-reflection, models avoid contradictions

Additionally, self-critique improves consistency over multiple responses. Without explicit self-reflection, LLMs can contradict themselves given different phrasings of the same query. By maintaining awareness of its previous outputs, an LLM employing self-critique is less likely to make contradictory statements.

Acknowledges Limits of Knowledge

  • LLMs can indicate uncertainty rather than giving dubious answers

Self-critique also allows LLMs to acknowledge the limits of their knowledge. When unable to fully resolve a flawed response, the model can indicate this uncertainty to the user rather than providing a dubious final answer. Admitting the need for more context improves transparency.

Challenges and Limitations

  • Effective critique needs strong assessment capabilities
  • Requires more computing resources
  • Smaller models may lack skills for robust self-critique

Of course, effective self-critique depends on the underlying LLM's assessment capabilities. Smaller models may lack the needed logical reasoning skills. Self-critique also requires more computing resources. Yet the benefits may well justify these costs for certain applications.


Applying in ChatGPT

Simple follow-up questions prompt ChatGPT to re-analyze its prior answers. Asking the model to detail its response step-by-step or break it down methodically forces reconsideration and can catch omissions. Queries like "Could you please re-examine your last reply to check for flaws?" also initiate a re-evaluation process.

Here are some ways you may ask for self-critique:

  1. Could you go over your previous response step by step to ensure nothing was overlooked?
  2. Would you mind breaking down your last reply to check for any omissions?
  3. I'd appreciate it if you could re-examine your recent answer, breaking it down step by step to see if something was left out.
  4. Can you analyze your last response in detail to ensure you covered everything?
  5. Please dissect your previous answer step by step and verify if all aspects were addressed.
  6. Kindly go through your last reply in a stepwise fashion to ascertain if any points were missed.
  7. I'd like you to revisit your latest response, going through each part methodically to see if something was omitted.
  8. Can you methodically break down your previous answer to make sure you didn't skip anything?
  9. Would you do a detailed walkthrough of your last response to confirm nothing was forgotten?
  10. Please recheck your prior response, detailing it out step by step, ensuring no details were missed.
  11. Could you go over your previous response step by step to identify any possible errors?
  12. Would you mind breaking down your last answer to see if there were any inaccuracies?
  13. Can you dissect your last response in a stepwise fashion to pinpoint any potential mistakes?
  14. Please analyze your recent answer in detail to ensure there are no flaws.
  15. I'd appreciate it if you could methodically examine your previous response to detect any shortcomings.
  16. Kindly revisit your last reply, assessing each part for any potential inconsistencies.
  17. Would you consider scrutinizing your latest response step by step to see if there's anything amiss?
  18. Can you perform a detailed check of your previous answer to ensure it's free from flaws?
  19. I'd like you to systematically review your last statement to spot any potential errors.
  20. Please walk through your prior response, step by step, keeping an eye out for any possible mistakes.

Application Using a Review Agent

Additionally, using a more specialized "review agent" persona provides another avenue for self-critique. After the initial response, the user can prompt ChatGPT to critique itself from the perspective of an expert. For example: "Please act as a medical reviewer and evaluate your previous diagnosis for any errors or limitations." This leverages ChatGPT's ability to adopt different personas.
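
The review-agent pattern can be captured in a small prompt builder. This is an illustrative sketch; the function name and wording are assumptions, not a fixed API.

```python
def review_agent_prompt(persona, subject):
    """Build a persona-framed self-critique request, e.g. asking a
    'medical reviewer' to re-evaluate a previous 'diagnosis'."""
    return (
        f"Please act as a {persona} reviewing your previous response. "
        f"Evaluate the {subject} for any errors or limitations."
    )
```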

These techniques demonstrate ways even current limited self-reflection can improve consistency and reduce mistakes. More advanced self-critique functionality will emerge as conversational AI continues progressing. ChatGPT's future iterations may have explicit self-critique modules that automatically assess generated responses before output.

Here is an example using a gynaecologist persona as a review agent for self-critique in ChatGPT:

User: I've been having some abnormal bleeding between periods along with pelvic pain. What could be causing my symptoms?

ChatGPT: Based on your symptoms of irregular bleeding and pelvic pain, some possible causes could include uterine fibroids, endometriosis, or polyps. It would be best to make an appointment with your gynaecologist for an exam and potential imaging tests to identify the underlying issue.

User: Please act as a gynaecologist reviewing your previous response. Evaluate it for any errors or limitations.

ChatGPT (as gynaecologist): Upon review of the previous response, a couple of additional points should be noted:

  • Cervical cancer screening should be up-to-date, as abnormal bleeding can occasionally indicate cervical abnormalities.
  • Certain medications can also cause irregular menses, so a medication history should be obtained.
  • Pelvic inflammatory disease is another potential cause if sexual activity is a factor.
  • An ultrasound, MRI, or scope exam of the uterus may be warranted for further evaluation.
  • Endometrial biopsy could be considered to rule out uterine cancer depending on risk factors.
  • Discussion of family history is advisable to assess any genetic predispositions.

While the initial response covered the most common causes, as a gynaecologist I would recommend gathering a more thorough history and discussing the need for further testing to reach a definitive diagnosis and rule out malignancy. Providing reassurance while investigating is important.


Done well, self-critique could make LLMs smarter, more consistent, and more transparent. Further research will uncover best practices for self-reflective design. However, the principle of enabling AI to critique its thinking lays a solid foundation. LLMs that refine their logic through internal feedback hold great promise for the future.
