
Advanced Prompting & Reasoning Techniques in Large Language Models 


Large Language Models (LLMs) have recently demonstrated remarkable capabilities in natural language processing tasks and beyond. To get the most out of them, we need to know how to prompt and guide them to reason effectively and unlock their full potential. Current LLMs are sensitive to prompt design, yet there is no single principle for writing the optimal prompt; instead, there are several techniques, each effective for specific tasks.

Simple prompts can produce plenty of output, but with little control over what that output looks like. A general rule therefore applies: the more context you provide (examples, constraints, or reasoning steps), the more control you have over the output.

So let’s discover the different advanced prompting and reasoning techniques. 

  1. Zero-Shot Prompting:  

A straightforward yet powerful technique where you prompt the model with a clear instruction but without any training examples, so the model relies on its pretrained knowledge to generate the response.

  2. Few-Shot Prompting:

For more control over the output, you can add a few training examples to your prompt (question/answer or input/output pairs) to steer the model toward responding in the same structure as the given examples.
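As a sketch, a few-shot prompt is just the example pairs concatenated ahead of the new question. The helper below is illustrative: `build_few_shot_prompt` is a hypothetical name, and the Q/A layout is one common convention.

```python
# Sketch: build a few-shot prompt from (question, answer) example pairs.

def build_few_shot_prompt(examples, query):
    """Format Q/A example pairs followed by the new question."""
    lines = []
    for question, answer in examples:
        lines.append(f"Q: {question}\nA: {answer}")
    lines.append(f"Q: {query}\nA:")  # leave the last answer for the model
    return "\n\n".join(lines)

examples = [
    ("What causes high blood pressure?", "Stress, salt, genetics."),
    ("What causes type 2 diabetes?", "Obesity, inactivity, genetics."),
]
prompt = build_few_shot_prompt(
    examples, "What causes iron deficiency anemia?"
)
```

Because the prompt ends with an empty `A:`, the model's most natural continuation is an answer in the same short style as the examples.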

  3. Chain-of-Thought Prompting (CoT):

CoT enables complex reasoning through intermediate steps before arriving at the final answer. The rule here: the more complex the question, the more reasoning steps help, and the better the output quality. But how can the manual work of writing these reasoning examples be reduced? Instead of hand-crafting examples, the sentence "Let's think step by step" is added to the prompt so the model generates its own reasoning chain. This variant is known as Zero-shot CoT, but it can still produce mistakes because the generated chains lack diversity. Auto-CoT was introduced to solve this problem by sampling diverse questions before generating the demonstration chains.
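Zero-shot CoT reduces to a one-line transformation of the prompt; the function name below is a hypothetical illustration:

```python
# Sketch: Zero-shot CoT appends the reasoning trigger phrase to the question.

def zero_shot_cot(question: str) -> str:
    """Append the canonical 'Let's think step by step' trigger."""
    return f"{question}\nLet's think step by step."

cot_prompt = zero_shot_cot(
    "What are the symptoms and treatments for iron deficiency anemia?"
)
```

Auto-CoT builds on exactly this trigger, applying it to a diverse sample of questions to generate demonstration chains automatically.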


Zhang et al., Automatic Chain-of-thought Prompting in Large Language Models, ICLR 2023. 

There are also several CoT variants, such as Tab-CoT, which structures the reasoning process as a table; this is especially effective for tasks with multiple aspects.

  4. Analogical Prompting:

This technique outperforms zero-shot CoT and manual few-shot CoT. The LLM is prompted to first recall relevant examples before solving the problem, helping it generalize and solve the original problem more effectively. Here you are not just reasoning; you are drawing a parallel between two related domains.
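Following the recall-then-solve structure described above, an analogical prompt can be sketched as a single template; the function name and exact wording are assumptions, not the paper's verbatim prompt:

```python
# Sketch: analogical prompting asks the model to self-generate related
# worked examples before tackling the actual problem.

def analogical_prompt(problem: str, n_examples: int = 3) -> str:
    """Ask the model to recall related problems before solving the new one."""
    return (
        f"Problem: {problem}\n\n"
        f"First, recall {n_examples} relevant and distinct example problems "
        "and explain how each was solved.\n"
        "Then, using those examples, solve the original problem step by step."
    )

ana_prompt = analogical_prompt(
    "What are the treatments for iron deficiency anemia?"
)
```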

  5. Least-to-Most Decomposition:

This dynamic reasoning technique states another rule: easy-to-hard generalization via decomposition. Instead of attempting to solve the entire problem at once, the model starts with the simplest subproblems and incrementally moves to more complex ones, reducing the reasoning load on the model and minimizing errors.
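The decomposition loop can be sketched as two stages, assuming a hypothetical `llm(prompt)` callable; a deterministic toy stub stands in for a real model so the example runs:

```python
# Least-to-Most sketch: decompose first, then solve subproblems in order,
# feeding earlier answers back into the context of later ones.

def least_to_most(problem, llm):
    subproblems = llm(
        f"Break this problem into simpler subproblems, easiest first:\n"
        f"{problem}"
    )
    context = []
    for sub in subproblems:
        answer = llm(f"Given what we know so far: {context}\nSolve: {sub}")
        context.append((sub, answer))
    return context[-1][1]  # answer to the final, hardest subproblem

def toy_llm(prompt):
    # Deterministic stand-in for a real model call, illustration only.
    if prompt.startswith("Break"):
        return ["What is iron?", "What is anemia?",
                "How is iron deficiency anemia treated?"]
    return "Answer to: " + prompt.splitlines()[-1]

final = least_to_most("Explain iron deficiency anemia treatment.", toy_llm)
```

With a real model, each call to solve a subproblem sees the earlier subproblems and their answers, which is what makes the final, hardest step tractable.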

Zhou et al., Least-to-Most Prompting Enables Complex Reasoning in Large Language Models, ICLR 2023.  

  6. Self-Consistency:

Self-consistency is a decoding strategy rather than a prompt style: it samples multiple diverse reasoning paths through few-shot CoT and then selects the most consistent answer among the generated responses.
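The selection step is just majority voting over sampled final answers. In this sketch the sampler is a deterministic stand-in for repeated high-temperature model calls:

```python
from collections import Counter
import itertools

def self_consistent_answer(question, sample, n=5):
    """Sample n reasoning paths and keep the most frequent final answer."""
    answers = [sample(question) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Toy sampler: cycles through canned final answers instead of calling a
# model at high temperature.
canned = itertools.cycle([
    "iron supplements", "iron supplements", "dietary changes",
    "iron supplements", "blood transfusion",
])
result = self_consistent_answer(
    "Treatment for iron deficiency anemia?", lambda q: next(canned)
)
# result -> "iron supplements" (3 of the 5 samples agree)
```

In practice the hard part is normalizing answers so that paraphrases of the same conclusion vote together; the Counter step itself stays this simple.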

  7. Chain of Draft (CoD):

A recent reasoning technique stating a new rule: reasoning does not require lengthy explanations. Instead of verbose step-by-step chains, the model is asked to keep each intermediate reasoning step as a short, minimal draft. CoD achieves accuracy comparable to CoT while using as little as 7.6% of the tokens.
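A CoD-style prompt can be sketched as a single instruction constraining draft length. The wording below paraphrases the five-words-per-step guideline; the `####` answer separator is an assumption for making the final answer easy to parse:

```python
def chain_of_draft_prompt(question: str) -> str:
    """Constrain each reasoning step to a minimal draft."""
    return (
        f"{question}\n"
        "Think step by step, but keep only a minimum draft for each "
        "thinking step, with five words at most. "
        "Return the final answer after the separator ####."
    )

cod_prompt = chain_of_draft_prompt(
    "What are the treatments for iron deficiency anemia?"
)
```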

  8. Memory-of-Thought (MoT):

MoT equips LLMs with the ability to pre-think, store, and recall past reasoning paths, enhancing their performance on various reasoning tasks. 
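A minimal sketch of the store-and-recall idea, assuming a simple in-memory store. Retrieval here is naive word overlap purely for illustration; real MoT systems use learned retrieval over stored reasoning chains:

```python
# Sketch of a Memory-of-Thought store: save reasoning chains, then
# retrieve the most similar past questions to reuse as demonstrations.

class ThoughtMemory:
    def __init__(self):
        self.entries = []  # list of (question, reasoning_chain) pairs

    def add(self, question, chain):
        self.entries.append((question, chain))

    def recall(self, query, k=2):
        # Naive lexical-overlap scoring; a stand-in for embedding search.
        q_words = set(query.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: -len(q_words & set(e[0].lower().split())),
        )
        return scored[:k]

mem = ThoughtMemory()
mem.add("What causes anemia?", "Low iron -> less hemoglobin -> anemia.")
mem.add("What is the capital of France?", "France's capital is Paris.")
top = mem.recall("What causes iron deficiency anemia?", k=1)
```

The recalled chains would then be prepended to the new prompt as few-shot demonstrations.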

To facilitate comparison, the following table presents a structured summary of the prompting methods reviewed, including typical prompt formats. 

| Technique | Example Prompt |
|---|---|
| Standard Prompting | "Iron deficiency anemia is" |
| Zero-Shot Prompting | "What are the symptoms and treatments for iron deficiency anemia?" |
| Few-Shot Prompting | "Q: What causes high blood pressure? A: Stress, salt, genetics. Q: What are the symptoms and treatments for iron deficiency anemia?" |
| Chain-of-Thought | "What are the symptoms and treatments for iron deficiency anemia? Let's think step by step: What is the condition? What causes it? What are the symptoms? How is it diagnosed? What are the treatment options?" |
| Zero-shot CoT | "What are the symptoms and treatments for iron deficiency anemia? Let's think step by step." |
| Auto-CoT | Provide a few diverse step-by-step Q&A examples, then the new query. Example 1: "Q: What is the capital of France? A: Let's think step by step. France is in Europe. Its largest city and seat of government is Paris. So, the capital of France is Paris." Example 2: "Q: What is 2 + 2? A: Let's think step by step. 2 plus 2 equals 4. So, 2 + 2 = 4." Then: "Q: What are the symptoms and treatments for iron deficiency anemia?" (the model generates the step-by-step reasoning for the new query) |
| Tab-CoT | "Explain iron deficiency anemia using a table with columns for cause, symptoms, diagnosis, and treatment." |
| Analogical Prompting | "Iron deficiency anemia is like a car running on low fuel. Use this analogy to explain symptoms and treatment." |
| Least-to-Most | "First explain what iron is. Then define anemia. Then describe iron deficiency anemia and how it's treated." |
| Self-Consistency | "Q: What are the treatments for iron deficiency anemia? Let's think step by step." Sample multiple answers at high temperature to allow diversity, e.g.: "It is treated with iron supplements."; "Treatment involves dietary changes and oral iron."; "Iron injections and blood transfusions are used."; "Treatment includes supplements and iron-rich foods."; "Treatment includes iron pills and treating root causes." Then choose the most frequent or consensus answer: "Iron supplements and iron-rich foods." |
| Chain of Draft | "What are the treatments for iron deficiency anemia? Think step by step, but keep each step to a minimal draft of five words at most." |
| Memory-of-Thought | "Recall your previous explanation of anemia causes. Now expand it with more detail on iron metabolism." |

To summarize, choosing the right prompting technique depends on your specific needs and the complexity of the task at hand. Zero-shot prompting is the first approach to try, due to its simplicity and effectiveness. When the desired output requires a specific format, few-shot prompting is a good choice. If the task is more complex and requires several reasoning steps or calculations, the CoT variants are the best choice. When a new or difficult concept needs to be made more accessible or intuitive, analogies can bridge the gap between known and unknown information, so analogical prompting will help. If you want to debug, learn from, or track a large task, least-to-most prompting helps divide the complex task into simpler subtasks. When the highest possible accuracy is required, especially for complex reasoning problems where a single CoT path might still lead to errors, self-consistency is the best choice. When a full chain of thought would be unnecessarily verbose and token cost matters, CoD keeps the reasoning concise without sacrificing accuracy. Finally, if you have had a long conversation with your model and want it to leverage successful reasoning from previous interactions, use Memory-of-Thought to recall and apply those effective patterns.

At Intixel, we apply advanced prompting techniques to effectively guide LLMs in supporting our data generation and validation processes. 

 