openai / weak-to-strong
☆2,552 · Updated last year
Alternatives and similar repositories for weak-to-strong
Users interested in weak-to-strong are comparing it to the repositories listed below.
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,220 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆987 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,405 · Updated last year
- ☆4,110 · Updated last year
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python ☆6,156 · Updated 3 months ago
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,073 · Updated 2 years ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,623 · Updated last year
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting wit… ☆1,104 · Updated last year
- A family of open-source Mixture-of-Experts (MoE) Large Language Models ☆1,635 · Updated last year
- Official repository of Evolutionary Optimization of Model Merging Recipes ☆1,382 · Updated 11 months ago
- ☆4,181 · Updated 3 months ago
- Training LLMs with QLoRA + FSDP ☆1,531 · Updated last year
- AllenAI's post-training codebase ☆3,317 · Updated last week
- ☆961 · Updated last year
- Modeling, training, eval, and inference code for OLMo ☆6,168 · Updated last month
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,693 · Updated last year
- A curated list of Large Language Model (LLM) Interpretability resources ☆1,444 · Updated 5 months ago
- LLM Transparency Tool (LLM-TT), an open-source interactive toolkit for analyzing the internal workings of Transformer-based language models… ☆849 · Updated 11 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast ☆1,906 · Updated 3 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,799 · Updated 5 months ago
- ☆1,011 · Updated 9 months ago
- ☆1,056 · Updated last year
- Robust recipes to align language models with human and AI preferences ☆5,427 · Updated 2 months ago
- A simple, performant, and scalable JAX LLM! ☆1,994 · Updated last week
- Run Mixtral-8x7B models in Colab or on consumer desktops ☆2,324 · Updated last year
- A PyTorch-native platform for training generative AI models ☆4,754 · Updated this week
- Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment ☆1,042 · Updated last year
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI ☆773 · Updated last year
- An open-source toolkit for LLM development ☆2,794 · Updated 10 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR ☆2,068 · Updated last year