ezelikman / quiet-star
Code for Quiet-STaR
☆739 · Updated last year
Alternatives and similar repositories for quiet-star
Users interested in quiet-star are comparing it to the libraries listed below.
- ☆1,035 · Updated 10 months ago
- The official implementation of Self-Play Fine-Tuning (SPIN) · ☆1,207 · Updated last year
- ☆964 · Updated 9 months ago
- [ICML 2024] Official repository for "Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models" · ☆797 · Updated last year
- Official repository for ORPO · ☆464 · Updated last year
- RewardBench: the first evaluation tool for reward models. · ☆646 · Updated 4 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). · ☆892 · Updated last month
- [NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models · ☆666 · Updated 4 months ago
- Large Reasoning Models · ☆805 · Updated 10 months ago
- FuseAI Project · ☆583 · Updated 9 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … · ☆782 · Updated 7 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward · ☆924 · Updated 8 months ago
- ☆546 · Updated 11 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. · ☆750 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI · ☆1,398 · Updated last year
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" · ☆470 · Updated last year
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively (a minimal sketch of this loop follows the list). · ☆750 · Updated last year
- Recipes to scale inference-time compute of open models · ☆1,114 · Updated 5 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" · ☆477 · Updated last year
- Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models". · ☆647 · Updated 2 months ago
- Codebase for Merging Language Models (ICML 2024) · ☆855 · Updated last year
- Automatic evals for LLMs · ☆550 · Updated 4 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning · ☆661 · Updated last year
- (ICML 2024) AlphaZero-like Tree-Search can guide large language model decoding and training · ☆283 · Updated last year
- Generative Representational Instruction Tuning · ☆675 · Updated 4 months ago
- [NeurIPS 2022] 🛒 WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents · ☆415 · Updated last year
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] · ☆356 · Updated last year
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning · ☆363 · Updated last year
- xLAM: A Family of Large Action Models to Empower AI Agent Systems · ☆573 · Updated 2 months ago
- ICML 2024: Improving Factuality and Reasoning in Language Models through Multiagent Debate · ☆477 · Updated 6 months ago
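
The self-feedback entry in the list above describes a draft-critique-revise loop. A minimal sketch of that loop, assuming only a caller-supplied `generate` function wrapping an LLM call; the function name, prompts, and the "STOP" stopping convention are illustrative assumptions, not the repository's actual API:

```python
from typing import Callable

def self_refine(task: str, generate: Callable[[str], str], max_iters: int = 3) -> str:
    """Iteratively draft, critique, and revise an answer with the same model.

    `generate` is any text-in/text-out LLM call supplied by the caller; the
    prompts and the 'STOP' convention below are illustrative assumptions.
    """
    # Initial draft
    answer = generate(f"Task: {task}\nWrite an initial answer.")
    for _ in range(max_iters):
        # Ask the model to critique its own answer
        feedback = generate(
            f"Task: {task}\nAnswer: {answer}\n"
            "Critique this answer. Reply with only 'STOP' if no changes are needed."
        )
        if feedback.strip().upper().startswith("STOP"):
            break
        # Revise the answer using the feedback
        answer = generate(
            f"Task: {task}\nAnswer: {answer}\nFeedback: {feedback}\n"
            "Rewrite the answer so it addresses the feedback."
        )
    return answer
```

Any text-completion client can be passed as `generate`; capping the loop with `max_iters` keeps the refinement from running indefinitely when the model never signals that it is satisfied.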