Tree of Thoughts (ToT) is a powerful and flexible algorithm for leveraging pre-trained language models to solve various problems by exploring multiple reasoning paths. It's designed to be plug-and-play, allowing users to easily connect their models and use the Tree of Thoughts method.
This implementation of Tree of Thoughts is brought to you by Agora. Agora is advancing humanity with open-source, state-of-the-art multi-modality AI research, working to combat humanity's grandest root problems: food insecurity, planetary insecurity, disease, and hopefully death itself.
Join our Discord and contribute to this project
Clone this repository: git clone https://github.com/kyegomez/tree-of-thoughts
Navigate to the repository folder: cd tree-of-thoughts
Install the required package: pip install openai
Create a Python script (e.g., example.py) and import the necessary classes:
from tree_of_thoughts import OpenAILanguageModel, CustomLanguageModel, TreeofThoughts, OptimizedOpenAILanguageModel, OptimizedTreeofThoughts

# v1
model = OpenAILanguageModel('api key')

# v2: parallel execution, caching, adaptive temperature
model = OptimizedOpenAILanguageModel('api key')

# choose the search algorithm ('BFS' or 'DFS')
search_algorithm = "BFS"

# thought-generation strategy: 'cot' (sample from a CoT prompt) or 'propose'
strategy = "cot"

# state-evaluation strategy: 'value' or 'vote'
evaluation_strategy = "value"

# create an instance of the Tree of Thoughts class (v1)
tree_of_thoughts = TreeofThoughts(model, search_algorithm)

# or v2 -> dynamic beam width: adjusts the beam width [b] based on the search depth and the quality of the generated thoughts
tree_of_thoughts = OptimizedTreeofThoughts(model, search_algorithm)

input_problem = "What are next generation reasoning methods for Large Language Models"
k = 5      # number of thoughts to generate per step
T = 3      # maximum depth (number of reasoning steps)
b = 5      # beam width for BFS
vth = 0.5  # value threshold for pruning in DFS

# call the solve method with the input problem and the other parameters
solution = tree_of_thoughts.solve(input_problem, k, T, b, vth)

# use the solution in your production environment
print(solution)
Or integrate your own custom language model:
from tree_of_thoughts import AbstractLanguageModel

class CustomLanguageModel(AbstractLanguageModel):
    def __init__(self, model):
        self.model = model

    def generate_thoughts(self, state, k):
        # implement the thought-generation logic using self.model
        pass

    def evaluate_states(self, states):
        # implement the state-evaluation logic using self.model
        pass
Run the example script:
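Assuming you saved the script above as example.py:

```bash
python example.py
```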
- General problem-solving framework for language models
- Supports both breadth-first search (BFS) and depth-first search (DFS) algorithms
- Easy integration with popular language models like OpenAI and Hugging Face
- Extensible and adaptable to different problem properties and resource constraints
- Define the thought decomposition based on the problem properties.
- Create a thought generator function G(pθ, s, k) with two strategies: (a) sample k i.i.d. thoughts from a CoT prompt, or (b) propose thoughts sequentially using a "propose prompt" (see the sketch after this list).
- Create a state evaluator function V(pθ, S) with two strategies: (a) value each state independently, or (b) vote across states.
- Choose a search algorithm (BFS or DFS) based on the tree structure.
- Implement the chosen search algorithm.
- Execute the chosen search algorithm with the input problem, thought generator, state evaluator, and other required parameters.
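As a rough illustration of the generator and evaluator above, here is a minimal sketch of both pairs of strategies. Here `prompt_model` is a hypothetical text-in/text-out callable standing in for the language model (in this repo, OpenAILanguageModel plays that role), and states are tuples of thought strings; the prompt wording is illustrative only.

```python
def generate_thoughts(prompt_model, state, k, strategy="cot"):
    """Thought generator G(p_theta, s, k): k candidate next thoughts for state s."""
    context = "\n".join(state)
    if strategy == "cot":
        # (a) sample k i.i.d. thoughts from a chain-of-thought prompt
        return [prompt_model(f"{context}\nNext reasoning step:") for _ in range(k)]
    # (b) propose k distinct thoughts sequentially via a single "propose prompt"
    proposals = prompt_model(f"{context}\nPropose {k} distinct next steps, one per line:")
    return proposals.splitlines()[:k]

def evaluate_states(prompt_model, states, strategy="value"):
    """State evaluator V(p_theta, S): map each candidate state to a score."""
    states = list(states)
    if strategy == "value":
        # (a) value each state independently on a 0-1 scale
        return {s: float(prompt_model("Rate this reasoning from 0 to 1:\n" + "\n".join(s)))
                for s in states}
    # (b) vote: ask the model once to pick the most promising state
    listing = "\n".join(f"{i}: {' | '.join(s)}" for i, s in enumerate(states))
    winner = int(prompt_model(f"Which numbered state is most promising?\n{listing}\nAnswer with the index only:"))
    return {s: float(i == winner) for i, s in enumerate(states)}
```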
class TreeofThoughts:
    def __init__(self, model, search_algorithm):
        self.model = model
        self.search_algorithm = search_algorithm

    def solve(self, x, k, T, b, vth):
        if self.search_algorithm == 'BFS':
            return self.tot_bfs(x, k, T, b)
        elif self.search_algorithm == 'DFS':
            return self.tot_dfs(x, k, T, vth)
        else:
            raise ValueError("Invalid search algorithm. Choose 'BFS' or 'DFS'.")

    def tot_bfs(self, x, k, T, b):
        # states are tuples of thoughts; the root state holds only the input problem
        S0 = {(x,)}
        for t in range(1, T + 1):
            # expand every frontier state with k candidate thoughts
            S0_t = {(*s, z) for s in S0 for z in self.model.generate_thoughts(s, k)}
            Vt = self.model.evaluate_states(S0_t)
            # keep the b highest-valued states (the beam)
            St = sorted(S0_t, key=lambda s: Vt[s], reverse=True)[:b]
            S0 = set(St)
        # generate a final thought from the best remaining state
        return self.model.generate_thoughts(max(St, key=lambda s: Vt[s]), 1)

    def tot_dfs(self, x, k, T, vth):
        output = []

        def dfs(s, t):
            if t > T:
                output.append(self.model.generate_thoughts(s, 1))
                return
            for z in self.model.generate_thoughts(s, k):
                s_prime = (*s, z)
                # only descend into states whose value clears the threshold vth
                if self.model.evaluate_states({s_prime})[s_prime] > vth:
                    dfs(s_prime, t + 1)

        dfs((x,), 1)
        return output
To use Tree of Thoughts with OpenAI's API, create a custom model class that inherits from AbstractLanguageModel and implements the required methods using OpenAI's API. Then create an instance of the TreeofThoughts class with the custom model and the desired search algorithm ('BFS' or 'DFS').
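The repo already ships OpenAILanguageModel for exactly this purpose, so the following is only a rough sketch of what such a class looks like. It assumes the pre-1.0 openai Python SDK; the class name ChatGPTModel and the prompt wording are illustrative, not part of this repo.

```python
import openai

from tree_of_thoughts import AbstractLanguageModel, TreeofThoughts

class ChatGPTModel(AbstractLanguageModel):
    """Illustrative OpenAI-backed model; see the repo's OpenAILanguageModel for the real one."""

    def __init__(self, api_key, model_name="gpt-3.5-turbo"):
        openai.api_key = api_key
        self.model_name = model_name

    def _ask(self, prompt):
        # one chat-completion round trip (pre-1.0 openai SDK interface)
        response = openai.ChatCompletion.create(
            model=self.model_name,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    def generate_thoughts(self, state, k):
        # sample k i.i.d. next thoughts for the current state
        context = "\n".join(state)
        return [self._ask(f"{context}\nWrite the next reasoning step:") for _ in range(k)]

    def evaluate_states(self, states):
        # value each state independently on a 0-1 scale
        scores = {}
        for s in states:
            reply = self._ask("Rate this reasoning from 0 to 1; reply with a number only:\n" + "\n".join(s))
            try:
                scores[s] = float(reply.strip())
            except ValueError:
                scores[s] = 0.0  # unparsable rating -> effectively prune this state
        return scores

tree_of_thoughts = TreeofThoughts(ChatGPTModel('api key'), 'BFS')
```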
To use Tree of Thoughts with Hugging Face Transformers, create a custom model class that inherits from AbstractLanguageModel and implements the required methods using Hugging Face Transformers. Then create an instance of the TreeofThoughts class with the custom model and the desired search algorithm ('BFS' or 'DFS').
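A minimal sketch under the same contract, assuming the transformers and torch packages are installed. The class name HuggingFaceModel, the default gpt2 checkpoint, and the length-based evaluator are illustrative placeholders; a real evaluator should use a value or vote prompt as described above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

from tree_of_thoughts import AbstractLanguageModel

class HuggingFaceModel(AbstractLanguageModel):
    """Illustrative Hugging Face-backed model."""

    def __init__(self, model_name="gpt2"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)

    def _complete(self, prompt, max_new_tokens=64):
        inputs = self.tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            output = self.model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=True)
        # decode only the newly generated tokens, not the prompt
        new_tokens = output[0][inputs["input_ids"].shape[1]:]
        return self.tokenizer.decode(new_tokens, skip_special_tokens=True)

    def generate_thoughts(self, state, k):
        context = "\n".join(state)
        return [self._complete(f"{context}\nNext reasoning step:") for _ in range(k)]

    def evaluate_states(self, states):
        # placeholder heuristic only -- swap in a proper value/vote prompt for real use
        return {s: float(len(" ".join(s))) for s in states}
```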
This algorithm is still in its infancy, yet its potential remains immense; let's advance the reasoning of AIs together under this banner.
- Provide a ready-to-use generate_thoughts function -- done
- Provide a ready-to-use evaluate_states function -- done
- Implement a more sophisticated prompt-engineering strategy to guide the model's reasoning process more effectively.
- Introduce reinforcement learning, distillation, and fine-tuning scripts to tune the model based on feedback from the Tree of Thoughts algorithm.
- Integrate heuristics that autonomously determine the search algorithm (BFS or DFS) based on problem indicators.
- Integrate heuristics that autonomously determine the strategy (CoT or propose).
- Integrate heuristics that autonomously set the input parameters k, T, b, and vth.
- Multi-modality Tree of Thoughts
- Multi-modality Forest of Thoughts
- Multi-modality World of Thoughts
The next big advancement for the Tree of Thoughts algorithm is to extend it to multi-modality, enabling it to handle not only text but also images, audio, and other data types. This will bring us closer to multi-modal superintelligence.
- Research and identify suitable multi-modal pre-trained models that can handle various data types (e.g., text, images, audio).
- Adapt the thought decomposition, thought generator, and state evaluator functions to handle multi-modal data.
- Develop a method for combining different modalities in the search tree, allowing the algorithm to reason across different data types.
- Implement and test the multi-modal Tree of Thoughts algorithm with various problems and datasets.
- Optimize the algorithm for performance and resource usage, ensuring it scales well with large multi-modal datasets.
- Publish the results and gather feedback from the community to further improve the multi-modal Tree of Thoughts algorithm.
Join us on this exciting journey to advance the Tree of Thoughts algorithm to multi-modality superintelligence! 🚀
Thanks to Shunyu Yao (Princeton University), Dian Yu (Google DeepMind), Jeffrey Zhao (Google DeepMind), Izhak Shafran (Google DeepMind), Thomas L. Griffiths (Princeton University), Yuan Cao (Google DeepMind), and Karthik Narasimhan (Princeton University) for sharing this amazing work with the world!
And thanks to Phil Wang (lucidrains) for inspiring me to devote myself to open-source AI research.