From the course: Prompt Engineering with LangChain


Self-consistency


- [Instructor] Self-consistency prompting was introduced in the March 2022 paper, "Self-Consistency Improves Chain of Thought Reasoning in Large Language Models," by Wang et al. So what is self-consistency prompting? The idea behind self-consistency is that complex reasoning problems often have multiple valid ways of thinking that lead to the same correct answer. By exploring these diverse reasoning paths, you can arrive at a more reliable answer. How is this different, though, from other prompting techniques? Well, traditional chain-of-thought prompting, which we've already seen, prompts a language model to generate a series of short sentences that mimic the reasoning process a person might use to solve a task. Self-consistency, instead of just taking the most probable reasoning path, samples multiple paths and then selects the most consistent answer among them. Unlike methods that require additional training, human annotations, or auxiliary models…
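The sample-then-vote loop just described can be sketched in plain Python. Here, `sample_reasoning_path` is a hypothetical stand-in for a temperature-sampled chain-of-thought LLM call (in practice you would invoke your model, for example through LangChain, with temperature above zero so each call can follow a different reasoning path); the rest is the self-consistency aggregation step, a simple majority vote over the extracted final answers:

```python
from collections import Counter

def sample_reasoning_path(question: str, seed: int) -> str:
    # Hypothetical stand-in for a temperature-sampled LLM call.
    # Real sampling would produce varied chain-of-thought text;
    # here we hard-code a few example paths, including one wrong one.
    paths = [
        "3 cars are in the lot, 2 more arrive, 3 + 2 = 5. Answer: 5",
        "Start with 3, add 2, giving a total of 5. Answer: 5",
        "2 added to 3 makes 5 cars in the lot. Answer: 5",
        "3 plus 2 is 6. Answer: 6",  # an occasional faulty path
    ]
    return paths[seed % len(paths)]

def extract_answer(path: str) -> str:
    # Pull the final answer out of the generated reasoning text.
    return path.rsplit("Answer:", 1)[-1].strip()

def self_consistent_answer(question: str, n_samples: int = 8) -> str:
    # Self-consistency: sample several reasoning paths, then
    # majority-vote on their final answers.
    answers = [
        extract_answer(sample_reasoning_path(question, i))
        for i in range(n_samples)
    ]
    return Counter(answers).most_common(1)[0][0]

question = "If 3 cars are in the lot and 2 more arrive, how many cars are in the lot?"
print(self_consistent_answer(question))  # majority vote yields "5"
```

Even though one sampled path reasons incorrectly, the majority vote across paths recovers the right answer, which is exactly the robustness self-consistency is after.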
