Chain-of-Thought (CoT)
CoT encourages the LLM to reason step by step, which makes the output easier to justify. It is typically achieved by adding phrases like "Let's think step by step" to the prompt.
Use Cases: Complex reasoning tasks where logical justification is critical, e.g. threat modeling.
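A minimal sketch of zero-shot CoT: the trigger phrase is simply appended to the task. `call_llm` is a hypothetical placeholder for whatever completion API is in use, not part of the original.

```python
# Zero-shot CoT sketch. `call_llm` is a hypothetical stand-in for a real
# completion API (OpenAI, a local model, etc.), assumed for illustration.
def call_llm(prompt: str) -> str:
    """Placeholder: wire this to your LLM provider."""
    return f"[model response to: {prompt[:40]}...]"

def cot_prompt(task: str) -> str:
    # Appending the trigger phrase is the whole technique in zero-shot CoT.
    return f"{task}\n\nLet's think step by step."

question = (
    "A web app stores session tokens in localStorage. "
    "What threats does this introduce, and how would you mitigate them?"
)
print(call_llm(cot_prompt(question)))
```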
Optimization by PROmpting (OPRO)
OPRO refines prompts based on previous context, using a separate optimizer LLM to iteratively improve the initial prompt.
Ex: OPRO evaluates prompts against metrics and generation settings (e.g. max tokens, temperature), then suggests modifications for subsequent prompts.
Use Cases: Interactive, dynamic dialogue with evolving queries.
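A sketch of an OPRO-style loop under stated assumptions: `call_llm` and the task-specific `score` function are placeholders, not the paper's implementation.

```python
# OPRO-style optimization sketch. `call_llm` and `score` are hypothetical
# placeholders; a real setup would score prompts on a held-out task set.
def call_llm(prompt: str) -> str:
    """Placeholder: wire this to your LLM provider."""
    return f"[model response to: {prompt[:40]}...]"

def score(prompt: str) -> float:
    """Placeholder evaluation, e.g. task accuracy achieved with this prompt."""
    return 0.0

def opro_optimize(initial_prompt: str, steps: int = 5) -> str:
    history = [(initial_prompt, score(initial_prompt))]
    for _ in range(steps):
        # Show the optimizer LLM the scored prompts so far, ask for a better one.
        trajectory = "\n".join(f"prompt: {p!r} score: {s:.2f}" for p, s in history)
        meta_prompt = (
            "Here are previous prompts with their scores:\n"
            f"{trajectory}\n"
            "Write a new prompt that should score higher."
        )
        candidate = call_llm(meta_prompt)
        history.append((candidate, score(candidate)))
    # Return the best-scoring prompt found across all iterations.
    return max(history, key=lambda ps: ps[1])[0]
```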
Knowledge Augmentation
Incorporates information from external knowledge sources into the prompt.
Ex: The paper included an "Interest Summary" derived from a user's content interactions; this wider view helps overcome sparse ad-interaction data.
Use Cases: Recommendation systems, question answering.
Limits: Too much information can overwhelm the LLM or introduce bias.
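A sketch of augmenting the prompt with an interest summary, mirroring the example above. The helper names and data are illustrative assumptions, not the paper's code.

```python
# Knowledge-augmentation sketch. `call_llm` and the helpers are hypothetical
# placeholders; the "Interest Summary" field mirrors the example above.
def call_llm(prompt: str) -> str:
    """Placeholder: wire this to your LLM provider."""
    return f"[model response to: {prompt[:40]}...]"

def build_interest_summary(content_interactions: list[str]) -> str:
    # In practice this could itself be LLM-generated from interaction logs.
    return "Interests inferred from content: " + ", ".join(content_interactions)

def recommend(user_interactions: list[str], candidate_ads: list[str]) -> str:
    # Inject the external knowledge directly into the prompt.
    prompt = (
        f"Interest Summary: {build_interest_summary(user_interactions)}\n"
        f"Candidate ads: {candidate_ads}\n"
        "Rank the ads by relevance to this user."
    )
    return call_llm(prompt)

print(recommend(["hiking articles", "camping gear reviews"],
                ["tent sale", "city hotel deal", "trail shoes"]))
```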
Asking Clarifying Questions
Prompts the LLM to ask for missing details before answering.
Ex: The LLM can ask questions like "Is the input 'angle' in degrees or radians?"
Use Cases: Tasks with unclear or ambiguous requirements.
Limits: Requires a mechanism for answering the model's questions.
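A sketch of a clarify-then-answer loop. The `CLARIFY:` protocol and the `answer_question` mechanism are illustrative assumptions; the point is that some channel must exist to answer the model's questions.

```python
# Clarifying-questions sketch. `call_llm` is a hypothetical placeholder;
# `answer_question` is the assumed mechanism for answering (a human here).
def call_llm(prompt: str) -> str:
    """Placeholder: wire this to your LLM provider."""
    return f"[model response to: {prompt[:40]}...]"

def answer_question(question: str) -> str:
    """Placeholder mechanism: route to a human, a spec lookup, etc."""
    return input(f"Model asks: {question}\nYour answer: ")

def solve(task: str, max_rounds: int = 3) -> str:
    prompt = (
        f"{task}\n"
        "If anything is ambiguous, reply with 'CLARIFY: <question>'. "
        "Otherwise, give the final answer."
    )
    reply = ""
    for _ in range(max_rounds):
        reply = call_llm(prompt)
        if reply.startswith("CLARIFY:"):
            # Feed the answer back and let the model try again.
            answer = answer_question(reply[len("CLARIFY:"):].strip())
            prompt += f"\nClarification: {answer}"
        else:
            return reply
    return reply
```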