iamhankai / Forest-of-Thought
Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning
☆24 · Updated 2 weeks ago
Alternatives and similar repositories for Forest-of-Thought:
Users interested in Forest-of-Thought are comparing it to the libraries listed below.
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆128 · Updated 8 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆120 · Updated last month
- FuseAI Project ☆83 · Updated 3 weeks ago
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems ☆49 · Updated last month
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆40 · Updated 7 months ago
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆106 · Updated last month
- ☆96 · Updated 10 months ago
- Converting Mixtral-8x7B to Mixtral-[1~7]x7B ☆21 · Updated 11 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆90 · Updated last month
- ☆69 · Updated last week
- ☆32 · Updated last month
- On Memorization of Large Language Models in Logical Reasoning ☆39 · Updated 3 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆99 · Updated 8 months ago
- [preprint] A novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA ☆41 · Updated last month
- ☆88 · Updated last month
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆58 · Updated 3 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆43 · Updated 2 weeks ago
- ☆52 · Updated 5 months ago
- ☆22 · Updated 7 months ago
- Delta-CoMe: achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆53 · Updated 3 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024) ☆28 · Updated 8 months ago
- [ICLR 2025] SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights ☆55 · Updated last week
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆102 · Updated 3 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated 9 months ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆141 · Updated last month
- Hammer: Robust Function-Calling for On-Device Language Models via Function Masking ☆57 · Updated this week
- rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking ☆34 · Updated last month