LLM Prompting Models: Introducing Tree of Thought


AI · Large Language Models

5/24/2023 · 2 min read

The field of Artificial Intelligence has seen great strides with the advent of Large Language Models (LLMs) like GPT-4. However, as with any new technology, there's room for improvement. One way to increase the efficacy of these models is by enhancing the way we prompt them, allowing them to generate more nuanced and contextually relevant outputs.

From Input-Output to Chain of Thought Prompts

Traditional prompting is based on a simple input-output mechanism. You provide an input prompt, and the LLM generates an output based on it. But this straightforward approach can produce shallow or lackluster responses, especially on problems that require several steps of reasoning.

A more refined approach is the Chain of Thought prompting. In this method, instead of asking for a final answer immediately, the LLM is guided through a series of smaller steps that cumulatively lead to the final conclusion. It resembles how humans often approach complex problems, breaking them down into smaller, more manageable parts.
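To make the contrast concrete, here is a minimal sketch in Python. The llm() helper is a hypothetical placeholder for whatever completion API you use; only the shape of the two prompts matters here.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for any text-completion call; wire this
    up to the provider of your choice."""
    ...

question = (
    "A cafe sells 120 coffees a day at $4 each. "
    "If sales rise 15%, what is the new daily revenue?"
)

# Plain input-output prompting: ask for the answer directly.
direct = llm(f"Q: {question}\nA:")

# Chain of Thought prompting: ask the model to reason in steps first.
cot = llm(
    f"Q: {question}\n"
    "A: Let's think step by step, then state the final answer on its own line."
)
```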

A possible explanation for the success of Chain of Thought prompting is the evolving context it builds up for the LLM. Every token the model sees, whether from the initial prompt or from its own earlier responses, becomes part of the context that conditions each subsequent prediction (the model's weights themselves remain fixed at inference time). This iterative context building results in a richer, more nuanced conversation that can lead to better conclusions.

Another promising method is known as Self-Consistency. At a basic level, this involves prompting the LLM, then asking it to evaluate the quality of its own response. This concept, when combined with Chain of Thought prompting, gives rise to more complex methodologies, such as Chain of Thought Voting and Tree of Thought.
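A minimal sketch of that generate-then-evaluate loop might look like the following, again assuming a hypothetical llm() helper in place of a real API client.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for any text-completion call."""
    ...

def self_consistent_answer(question: str) -> str:
    # First pass: produce a reasoned draft.
    draft = llm(f"Q: {question}\nA: Let's think step by step.")

    # Second pass: the model evaluates the quality of its own response
    # and revises it if the reasoning doesn't hold up.
    return llm(
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        "Check the reasoning above for errors. If it holds, restate the "
        "final answer; otherwise, give a corrected one."
    )
```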

In Chain of Thought Voting, the LLM works through multiple chains of thought, each leading to a distinct conclusion. These conclusions are then presented back to the LLM, which uses them to generate a single, self-consistency-enforced output.

For example, let's consider a case where the LLM is asked to suggest the best location for a new restaurant. It might go through three chains of thoughts considering various factors like population density, local food trends, and economic conditions. Based on these different chains, it might suggest three different locations. In the voting phase, the LLM would evaluate the potential of each location considering all factors simultaneously and then pick the one that is most consistent and optimal.
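One way to sketch the voting procedure, under the same assumed llm() helper, is to sample several independent chains and then hand all of their conclusions back to the model in a single prompt:

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for any text-completion call."""
    ...

def cot_vote(question: str, n_chains: int = 3) -> str:
    # Run several independent chains of thought; with sampling
    # temperature above zero, each may reach a distinct conclusion.
    chains = [
        llm(f"Q: {question}\nA: Let's think step by step.")
        for _ in range(n_chains)
    ]

    # Voting phase: present all conclusions back to the model at once
    # and ask for the most consistent, best-supported choice.
    candidates = "\n\n".join(
        f"Candidate {i + 1}:\n{chain}" for i, chain in enumerate(chains)
    )
    return llm(
        f"Question: {question}\n\n{candidates}\n\n"
        "Considering every candidate and all the factors they raise, "
        "pick the conclusion that is most consistent and explain why."
    )
```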

Another strategy is Chain of Thought Refinement. Here, the LLM cycles through its thought process, refining the main thought chain based on newly created sub-chains. This iterative process continues until the desired conclusion is reached. It's akin to the process of brainstorming, where multiple ideas are proposed, evaluated, and refined to enhance the main argument.
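A rough sketch of this refinement loop follows. The fixed number of rounds is a stand-in for whatever stopping criterion you prefer, and llm() remains a hypothetical placeholder.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for any text-completion call."""
    ...

def refine(question: str, rounds: int = 3) -> str:
    # Start with an initial main chain of thought.
    main_chain = llm(f"Q: {question}\nA: Let's think step by step.")

    for _ in range(rounds):
        # Spawn a sub-chain that probes the weakest part of the
        # current reasoning in more depth.
        sub_chain = llm(
            f"Question: {question}\n"
            f"Current reasoning: {main_chain}\n"
            "Identify the weakest step above and explore it further."
        )
        # Fold the sub-chain's findings back into the main chain.
        main_chain = llm(
            f"Question: {question}\n"
            f"Current reasoning: {main_chain}\n"
            f"New analysis: {sub_chain}\n"
            "Rewrite the reasoning to incorporate the new analysis."
        )
    return main_chain
```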

Lastly, the Tree of Thought approach takes Chain of Thought prompting and expands it further. Here, multiple unique chains of thought are initiated and evaluated. The most promising chains are then expanded, adding new branches of thought at each stage. Self-consistency is enforced at each level to ensure the conclusions are coherent and relevant.
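A simplified breadth-first sketch of this idea, with assumed breadth, depth, and pruning parameters and the same hypothetical llm() helper, might look like this:

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for any text-completion call."""
    ...

def score(question: str, chain: str) -> float:
    # Self-consistency check: the model rates how promising a partial
    # chain of reasoning is.
    verdict = llm(
        f"Question: {question}\nReasoning so far: {chain}\n"
        "Rate from 0 to 10 how likely this reasoning is to reach a "
        "correct conclusion. Reply with the number only."
    )
    try:
        return float(verdict.strip())
    except (ValueError, AttributeError):
        return 0.0  # An unparseable rating counts as a dead branch.

def tree_of_thought(
    question: str, breadth: int = 3, depth: int = 3, keep: int = 2
) -> str:
    frontier = [""]  # Each entry is a partial chain of reasoning.

    for _ in range(depth):
        # Expand: branch every surviving chain into several next steps.
        candidates = [
            f"{chain}\n" + str(llm(
                f"Question: {question}\nReasoning so far: {chain}\n"
                "Propose one next reasoning step."
            ))
            for chain in frontier
            for _ in range(breadth)
        ]
        # Prune: keep only the most promising chains for the next level.
        frontier = sorted(
            candidates, key=lambda c: score(question, c), reverse=True
        )[:keep]

    # Conclude from the best surviving chain.
    return llm(
        f"Question: {question}\nReasoning: {frontier[0]}\n"
        "State the final conclusion."
    )
```

Pruning at every level is what keeps this tractable: the number of model calls grows linearly with depth rather than exponentially with the full branching factor.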

In a nutshell, prompting techniques can vastly enhance the effectiveness of LLMs. By implementing refined methods like Chain of Thought, Chain of Thought Voting, and Tree of Thought, we can guide these models to provide more accurate, relevant, and insightful outputs, thereby maximizing their potential. With ongoing research and experimentation in this space, the future of AI communication seems to be on an exciting trajectory.