openai / weak-to-strong
☆2,528 · Updated 11 months ago
Alternatives and similar repositories for weak-to-strong:
Users who are interested in weak-to-strong are comparing it to the libraries listed below.
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,378 · Updated last year
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,152 · Updated last year
- ☆959 · Updated 3 months ago
- A simple, performant and scalable Jax LLM! ☆1,711 · Updated this week
- Reaching LLaMA2 Performance with 0.1M Dollars ☆980 · Updated 9 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,549 · Updated 6 months ago
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting wit… ☆1,066 · Updated last year
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆1,984 · Updated last year
- Training LLMs with QLoRA + FSDP ☆1,476 · Updated 6 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,526 · Updated last year
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆5,940 · Updated 3 weeks ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,736 · Updated 4 months ago
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI ☆769 · Updated last year
- ☆4,077 · Updated 11 months ago
- An Open-source Toolkit for LLM Development ☆2,776 · Updated 3 months ago
- A unified evaluation framework for large language models ☆2,606 · Updated last week
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,475 · Updated 8 months ago
- A curated list of Large Language Model (LLM) Interpretability resources. ☆1,321 · Updated 4 months ago
- Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment ☆1,036 · Updated 11 months ago
- Official Pytorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,253 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,166 · Updated last week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,243 · Updated 2 months ago
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling ☆1,676 · Updated 9 months ago
- Llama-3 agents that can browse the web by following instructions and talking to you ☆1,400 · Updated 4 months ago
- A PyTorch native library for large-scale model training ☆3,665 · Updated this week
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,103 · Updated last year
- [ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct ☆2,016 · Updated 6 months ago
- ☆1,029 · Updated last year
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,733 · Updated last year
- TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones ☆1,284 · Updated last year