Intelligent-Internet / ii-thought
☆12, updated last week
Alternatives and similar repositories for ii-thought:
Users interested in ii-thought are comparing it to the repositories listed below:
- Simple examples using Argilla tools to build AI (☆52, updated 4 months ago)
- Train your own SOTA deductive reasoning model (☆81, updated 3 weeks ago)
- Data preparation code for CrystalCoder 7B LLM (☆44, updated 10 months ago)
- GPT-4 Level Conversational QA Trained In a Few Hours (☆59, updated 7 months ago)
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? (☆56, updated 2 weeks ago)
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. (☆169, updated 2 months ago)
- ☆107, updated 2 weeks ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) (☆91, updated 2 months ago)
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… (☆208, updated 5 months ago)
- ☆36, updated 2 months ago
- Low-Rank adapter extraction for fine-tuned transformers models (☆171, updated 11 months ago)
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) (☆91, updated 3 weeks ago)
- ☆66, updated 10 months ago
- Source code for our paper: "SelfGoal: Your Language Agents Already Know How to Achieve High-level Goals" (☆65, updated 9 months ago)
- Small, simple agent task environments for training and evaluation (☆18, updated 5 months ago)
- ☆112, updated 6 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" (☆140, updated last week)
- Pre-training code for CrystalCoder 7B LLM (☆54, updated 10 months ago)
- ☆111, updated 3 months ago
- Just a bunch of benchmark logs for different LLMs (☆119, updated 8 months ago)
- ☆30, updated 8 months ago
- 🚢 Data Toolkit for Sailor Language Models (☆88, updated last month)
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM) (☆210, updated last week)
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna (☆39, updated 2 months ago)
- Small and Efficient Mathematical Reasoning LLMs (☆71, updated last year)
- ☆80, updated last month
- ☆152, updated 8 months ago
- ☆50, updated 4 months ago
- ☆144, updated last month
- Data preparation code for Amber 7B LLM (☆86, updated 10 months ago)