The paper introduces "ADAPT," a novel method for using Large Language Models (LLMs) in interactive decision-making tasks that require planning and adapting to an environment. The approach improves task success rates by decomposing complex tasks into sub-tasks on demand, stepping in precisely where standard methods fail because a task is too complex to execute directly.

Key Points

  • Overview and Purpose: "ADAPT" (As-Needed Decomposition and Planning with Language Models) addresses the limitations of existing LLM-based methods in complex interactive decision-making tasks. It uses recursive decomposition and planning to adapt to task complexity and LLM capabilities.
  • Existing Approaches and Limitations: Prior work uses LLMs in two ways: as iterative executors or in plan-and-execute pipelines. Both struggle when a sub-task is too complex for the executor to carry out, causing the entire task to fail.
  • ADAPT's Approach: ADAPT plans and decomposes only when necessary, i.e., when direct execution fails. It employs separate planner and executor LLM modules and recursively decomposes failed sub-tasks, so the depth of decomposition adjusts dynamically to both the executor LLM's capabilities and the task's complexity.
  • Performance and Advantages:
    • ADAPT outperforms established baselines by significant margins on all three evaluated datasets (ALFWorld, WebShop, TextCraft).
    • It effectively addresses execution failures by further decomposing complex sub-tasks via the planner.
    • The method demonstrates dynamic adaptation to varying LLM execution capabilities and task complexities.
  • Methodology:
    • Executor LLM: Executes tasks in a given environment with a set of atomic skills and determines task completion.
    • Planner LLM: Breaks down complex tasks into sub-tasks, using logical operators (AND, OR) for task combination.
    • Controller: Integrates the executor and planner modules within an LLM program, managing task delegation and termination criteria (see the interface sketch after this list).
  • Datasets and Experimental Setup: ADAPT was evaluated on three datasets: ALFWorld (a text-based household environment), WebShop (a simulated online-shopping environment), and TextCraft (a text-based crafting game).
  • Comparative Results: Across all datasets, ADAPT achieved the highest success rates compared to baselines like ReAct, Plan-and-Execute, Try Again with ReAct, and Reflexion.
  • Analysis and Discussion:
    • ADAPT's success rate improves as the maximum allowed depth of decomposition increases.
    • It caters to different execution capabilities: experiments show it adapts across different executor prompts and planner/executor LLM combinations.
    • It adapts to task complexity, adjusting its approach to the inherent difficulty of tasks within each dataset.
  • Conclusion: ADAPT represents a significant advancement in using LLMs for complex decision-making tasks, offering a flexible, dynamic approach that significantly enhances performance and adapts to varying task demands and LLM capabilities.
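
To make the division of labor concrete, here is a minimal sketch of the three modules as Python type signatures. These signatures are illustrative assumptions rather than the paper's code; in ADAPT, the executor and planner are prompted LLM calls, while the controller is ordinary program logic around them.

```python
# Illustrative interfaces for ADAPT's three modules; these signatures
# are assumptions for exposition, not code from the paper.
from typing import Callable, List

Executor = Callable[[str], bool]          # task -> did execution succeed?
Planner = Callable[[str], List[str]]      # failed task -> sub-tasks, to be
                                          # combined with AND/OR logic
Controller = Callable[[str, int], bool]   # (task, depth) -> overall success
```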

The ADAPT Process

The ADAPT (As-Needed Decomposition and Planning with Language Models) process is a method designed to improve the performance of Large Language Models (LLMs) on complex decision-making tasks. It dynamically adjusts its approach based on the complexity of the task and the capabilities of the LLM. Below is a step-by-step breakdown of the process, with a running example drawn from the paper for clarity.

The steps and running example below follow the ADAPT paper.

1. Task Initialization

  • Overview: The process begins with a high-level task that needs to be accomplished.
  • Example: Imagine the task is "Put a clean mug on a desk in a simulated household environment."

2. Executor LLM Attempts Task Execution

  • Overview: The executor LLM tries to perform the task directly, using a set of atomic skills predefined for the environment (a rough sketch of this loop follows below).
  • Example: The LLM might start by searching for a mug in the kitchen. If it finds a mug, it moves to the next step of cleaning the mug.
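
As a rough illustration, the executor's inner loop might look like the sketch below. The llm_next_action helper, the env interface, and the skill list are hypothetical stand-ins, not the paper's implementation; the key point is that the executor itself decides when it has finished or failed.

```python
# A sketch of the executor's inner loop. `llm_next_action` and `env`
# are hypothetical stand-ins for a prompted LLM call and the task
# environment; the skill list is illustrative.
ATOMIC_SKILLS = ["goto", "take", "clean", "put"]

def llm_next_action(task: str, history: list, skills: list) -> str:
    """Hypothetical LLM call: choose the next atomic action, or 'done'/'fail'."""
    ...

def executor(task: str, env, max_steps: int = 20) -> bool:
    history = []
    for _ in range(max_steps):
        action = llm_next_action(task, history, ATOMIC_SKILLS)
        if action == "done":    # the LLM judges the task complete
            return True
        if action == "fail":    # the LLM concludes it cannot finish
            return False
        observation = env.step(action)          # run the atomic skill
        history.append((action, observation))   # feed it back next turn
    return False                # step budget exhausted: report failure
```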

3. Assessing Task Completion

  • Overview: After attempting the task, the executor LLM assesses whether the task has been completed.
  • Example: If the LLM cannot find the mug, it concludes that the task is incomplete or failed.

4. Invoking the Planner LLM for Decomposition (If Needed)

  • Overview: If the executor LLM fails to complete the task, the planner LLM is called to decompose it into smaller, more manageable sub-tasks (an example decomposition prompt is sketched below).
  • Example: The planner may decompose the task into sub-tasks like "Find a mug on the countertop," "Find a mug in the cabinet," and "Clean the mug."
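
For illustration, a decomposition prompt in this spirit might look like the following; the wording and output format are assumptions for this post, not the paper's actual prompt.

```python
# A hypothetical planner prompt; the wording and output format are
# illustrative assumptions, not the paper's prompt.
PLANNER_PROMPT = """Task: {task}
The executor could not complete this task directly. Break it into
smaller sub-tasks combined with AND / OR, for example:
(Find a mug on the countertop OR Find a mug in the cabinet) AND Clean the mug
Plan:"""
```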

5. Recursive Decomposition and Execution

  • Overview: Each sub-task is then treated as a new task and passed back to the executor LLM. The process is recursive: if a sub-task is still too complex, it is broken down further (see the sketch after this step).
  • Example: If the executor cannot find a mug on the countertop, the task "Find a mug in the cabinet" is attempted. If it finds a mug in the cabinet, the next sub-task "Clean the mug" is initiated.
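
Putting the pieces together, the controller's recursion can be sketched as follows. The executor and planner stubs stand in for prompted LLM calls, and for simplicity the planner's sub-tasks are treated as an AND sequence here; the OR case is covered under step 7.

```python
# A minimal sketch of ADAPT's recursive control loop. `executor` and
# `planner` are hypothetical stand-ins for prompted LLM calls.
MAX_DEPTH = 3  # termination criterion: stop decomposing past this depth

def executor(task: str) -> bool:
    """Hypothetical executor LLM: attempt `task`, report success."""
    ...

def planner(task: str) -> list:
    """Hypothetical planner LLM: split a failed `task` into sub-tasks."""
    ...

def adapt(task: str, depth: int = 1) -> bool:
    if executor(task):         # first, try to execute the task directly
        return True
    if depth >= MAX_DEPTH:     # too deep: stop decomposing, report failure
        return False
    sub_tasks = planner(task)  # decompose only because execution failed
    # each sub-task becomes a new task; all() stops at the first failure
    return all(adapt(sub, depth + 1) for sub in sub_tasks)
```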

6. Dynamic Adjustment Based on Executor Capability

  • Overview: ADAPT dynamically adjusts the task decomposition based on the executor's success or failure at each sub-task, aligning with the executor's capabilities.
  • Example: If the LLM successfully executes simpler sub-tasks (like cleaning the mug) but struggles with more complex ones (like locating the mug), the decomposition will focus more on the complex parts.

7. Logical Combination of Sub-tasks

  • Overview: Sub-tasks are combined using logical operators (AND, OR) to form a coherent plan for completing the main task (an evaluation sketch follows below).
  • Example: The plan might be formulated as "Find a mug on the countertop OR in the cabinet" AND "Clean the mug" AND "Place the mug on the desk."
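
To show how such a plan could be evaluated, here is a sketch over a nested-tuple plan representation. The representation and the execute callback are illustrative choices for this post, not the paper's exact format.

```python
# A sketch of AND/OR plan evaluation. The nested-tuple representation
# and the `execute` callback are illustrative, not the paper's format.
def evaluate(node, execute) -> bool:
    """A node is a task string (leaf) or a tuple (op, children)."""
    if isinstance(node, str):
        return execute(node)   # leaf: hand the sub-task to the executor
    op, children = node
    if op == "AND":            # every child must succeed (stops on failure)
        return all(evaluate(child, execute) for child in children)
    if op == "OR":             # first success wins (stops on success)
        return any(evaluate(child, execute) for child in children)
    raise ValueError(f"unknown operator: {op}")

# The mug plan from the example above; evaluate(plan, executor) would
# run it against an executor callback.
plan = ("AND", [
    ("OR", ["Find a mug on the countertop", "Find a mug in the cabinet"]),
    "Clean the mug",
    "Place the mug on the desk",
])
```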

8. Task Completion and Feedback

  • Overview: Once the combined sub-tasks succeed, the main task is considered complete. Execution failures along the way are themselves the feedback signal: they tell the controller where further decomposition is needed.
  • Example: Once the mug is placed on the desk, the task is marked as completed. If execution fails at any point, that failure triggers another round of decomposition for the offending sub-task rather than abandoning the task.

ADAPT enhances the capability of LLMs in complex tasks by intelligently breaking down tasks into smaller parts that are within the LLM's ability to execute. This method provides a way to overcome the limitations of current LLMs, especially in environments that require a series of interdependent actions. By dynamically adjusting to the complexity of the task and the capability of the LLM, ADAPT ensures a higher success rate in task completion compared to traditional methods.


Let's Adapt the ADAPT Framework to Prompting

The ADAPT (As-Needed Decomposition and Planning with Language Models) process offers a structured approach to prompt engineering, especially for complex tasks that Large Language Models (LLMs) might initially struggle with. Here's how ADAPT can be applied in prompt engineering:

1. Define the High-Level Task

  • Initial Prompt Formulation: Begin by clearly defining the high-level task in the prompt. This step sets the stage for the LLM to understand the overall objective.
  • Example: "Write a detailed guide on how to bake a chocolate cake."

2. Executor LLM's Initial Attempt

  • Assess LLM's Current Capabilities: Use the LLM to attempt the task based on the initial prompt. Analyze the output to identify areas of strength and weakness in its response.
  • Example: The LLM provides a general overview of baking a cake but misses specific details like ingredient measurements or baking time.

3. Identify Gaps and Challenges

  • Diagnosis: Review the LLM's response to identify specific aspects of the task it struggled with. This step is crucial for understanding what needs to be decomposed or detailed further.
  • Example: The LLM may not specify the quantity of ingredients or the exact baking temperature.

4. Decompose Task into Sub-Tasks

  • Detailed Prompting: Break down the high-level task into more detailed, specific sub-tasks. Create prompts that focus on these sub-tasks individually.
  • Example: Create separate prompts like "List the ingredients and their quantities for a chocolate cake" or "Explain the process of mixing cake ingredients."

5. Recursive Prompt Refinement

  • Iterative Refinement: Use the outputs from the sub-task prompts to refine the overall response. This process may involve several iterations of prompting and response analysis.
  • Example: Continuously refine prompts based on the LLM's responses until each aspect of baking a cake is thoroughly covered.

6. Combine Sub-Task Responses

  • Synthesis: Integrate the responses from the sub-tasks to form a comprehensive and coherent final output.
  • Example: Combine the detailed ingredient list, mixing instructions, and baking guidelines into a complete guide.

7. Dynamic Adjustment and Learning

  • Feedback Loop: Use the success or failure of sub-task responses to inform future prompt engineering strategies. This approach helps in learning how to better structure prompts for complex tasks.
  • Example: If the LLM provides better responses when asked about specific steps in detail, use this approach for similar tasks in the future.

8. Refining and Finalizing the Output

  • Output Optimization: Ensure that the final output is cohesive, covers all aspects of the task, and is presented in a logical, easy-to-understand manner.
  • Example: Ensure the final guide on baking a chocolate cake is well-structured, with a clear sequence from ingredients to serving.

Here's how you would craft the prompts for each step of the process:

1. Define the High-Level Task

  • Initial Prompt: "Write a comprehensive guide on how to bake a chocolate cake, including ingredients, preparation steps, baking, and serving."

2. Executor LLM's Initial Attempt

  • Assessment Prompt: No additional prompt needed at this stage. The initial response to the above prompt is assessed to determine its comprehensiveness and accuracy.

3. Identify Gaps and Challenges

  • Diagnosis Prompt: Again, no additional prompt is required. Instead, review the LLM's response to identify gaps or areas needing more detail.

4. Decompose Task into Sub-Tasks

  • Sub-Task Prompts:
    • Ingredients Prompt: "List all the ingredients required to bake a chocolate cake, including specific quantities for each."
    • Preparation Steps Prompt: "Describe the step-by-step process of preparing the batter for a chocolate cake."
    • Baking Instructions Prompt: "Provide detailed instructions on how to bake a chocolate cake, including oven temperature, positioning, and timing."
    • Decoration and Serving Prompt: "Explain how to decorate and serve a chocolate cake, including suggestions for frosting and presentation."

5. Recursive Prompt Refinement

  • Iterative Refinement Prompts: Based on responses, refine each sub-task prompt.
    • If the LLM's response to the "Preparation Steps" prompt is vague, a refined prompt could be: "Detail the process of mixing the ingredients for a chocolate cake, including the order of mixing and techniques used."

6. Combine Sub-Task Responses

  • Synthesis Prompt: No new prompt is needed here. Instead, the responses from each sub-task are combined to form a complete guide.

7. Dynamic Adjustment and Learning

  • Feedback Loop Prompt: Based on the synthesis, ask for improvements or missing elements. For example: "Review the complete chocolate cake guide and suggest any improvements for clarity and completeness."

8. Refining and Finalizing the Output

  • Output Optimization Prompt: "Revise the chocolate cake guide to ensure it flows logically, is easy to follow, and covers all aspects of the baking process from start to finish."

Each prompt targets a specific aspect of the task, ensuring that the LLM's responses, when combined, provide a thorough and detailed guide. The sketch below wires these prompts into a single workflow, highlighting how the ADAPT methodology breaks a complex task into manageable sub-tasks and yields a more complete output.
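
As a closing illustration, here is a sketch of that workflow. The call_llm helper is a hypothetical stand-in for whatever LLM API you use, and the iterative refinement of step 5 is left implicit; treat this as a sketch under those assumptions, not a definitive implementation.

```python
# A sketch wiring the eight steps into a single workflow. `call_llm`
# is a hypothetical stand-in for a real LLM API call.
def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to an LLM and return its text."""
    raise NotImplementedError("plug in your LLM API here")

SUB_PROMPTS = [
    "List all the ingredients required to bake a chocolate cake, "
    "including specific quantities for each.",
    "Describe the step-by-step process of preparing the batter for a "
    "chocolate cake.",
    "Provide detailed instructions on how to bake a chocolate cake, "
    "including oven temperature, positioning, and timing.",
    "Explain how to decorate and serve a chocolate cake, including "
    "suggestions for frosting and presentation.",
]

def bake_cake_guide() -> str:
    # Steps 1-3: attempt the high-level task, then review the draft for gaps
    # (in practice you would inspect `draft` to choose which sub-prompts to run).
    draft = call_llm("Write a comprehensive guide on how to bake a chocolate "
                     "cake, including ingredients, preparation steps, "
                     "baking, and serving.")

    # Steps 4-5: run the sub-task prompts (refine any that come back vague).
    sections = [call_llm(prompt) for prompt in SUB_PROMPTS]

    # Step 6: synthesize the sub-task responses into a single guide.
    guide = call_llm("Combine these sections into one coherent chocolate "
                     "cake guide:\n\n" + "\n\n".join(sections))

    # Steps 7-8: ask for feedback, then a final polish.
    return call_llm("Revise this guide so it flows logically, is easy to "
                    "follow, and covers the process from ingredients to "
                    "serving:\n\n" + guide)
```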


Applying the ADAPT methodology to prompt engineering offers a systematic, structured way to extract detailed and accurate information from LLMs on complex tasks: break the task down, iteratively refine the sub-task prompts, and synthesize the results into a comprehensive response.
