openai / weak-to-strong
☆2,544 · Updated last year
Alternatives and similar repositories for weak-to-strong
Users interested in weak-to-strong are comparing it to the libraries listed below.
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,398 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆986 · Updated last year
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,209 · Updated last year
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting wit… ☆1,101 · Updated last year
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,061 · Updated 2 years ago
- A unified evaluation framework for large language models ☆2,738 · Updated 3 weeks ago
- LLM Transparency Tool (LLM-TT), an open-source interactive toolkit for analyzing internal workings of Transformer-based language models. ☆841 · Updated 10 months ago
- Training LLMs with QLoRA + FSDP ☆1,527 · Updated 11 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,616 · Updated last year
- ☆4,101 · Updated last year
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,135 · Updated 2 months ago
- Official repository of Evolutionary Optimization of Model Merging Recipes ☆1,374 · Updated 11 months ago
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" ☆3,323 · Updated last year
- ☆945 · Updated last year
- Robust recipes to align language models with human and AI preferences ☆5,412 · Updated last month
- A curated list of Large Language Model (LLM) Interpretability resources. ☆1,432 · Updated 4 months ago
- Modeling, training, eval, and inference code for OLMo ☆6,055 · Updated last week
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆2,062 · Updated last year
- Run Mixtral-8x7B models in Colab or consumer desktops ☆2,327 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,623 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,620 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,687 · Updated last year
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling ☆1,771 · Updated last year
- PyTorch code and models for V-JEPA self-supervised learning from video. ☆3,252 · Updated 8 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,891 · Updated 2 months ago
- Code for Quiet-STaR ☆739 · Updated last year
- A PyTorch native platform for training generative AI models ☆4,604 · Updated this week
- Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment ☆1,042 · Updated last year
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,262 · Updated 3 months ago
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,338 · Updated last month