openai / transformer-debugger
☆4,050 · Updated 7 months ago
Alternatives and similar repositories for transformer-debugger:
Users interested in transformer-debugger are comparing it with the libraries listed below.
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆5,749 · Updated last month
- PyTorch native post-training library ☆4,703 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆7,353 · Updated this week
- Tools for merging pretrained large language models. ☆5,113 · Updated last week
- Modeling, training, eval, and inference code for OLMo ☆5,024 · Updated this week
- A framework for few-shot evaluation of language models. ☆7,474 · Updated this week
- PyTorch code and models for V-JEPA self-supervised learning from video. ☆2,745 · Updated 5 months ago
- 20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale. ☆11,197 · Updated this week
- Train transformer language models with reinforcement learning. ☆10,609 · Updated this week
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization (see the sketch after this list). ☆9,326 · Updated 6 months ago
- Robust recipes to align language models with human and AI preferences ☆4,896 · Updated last month
- The official PyTorch implementation of Google's Gemma models ☆5,338 · Updated last week
- Go ahead and axolotl questions ☆8,293 · Updated this week
- An easy-to-use, scalable, and high-performance RLHF framework (70B+ PPO full tuning, iterative DPO, LoRA, RingAttention, and RFT) ☆3,761 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,522 · Updated this week
- A PyTorch native library for large model training ☆3,091 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆6,749 · Updated 6 months ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆5,169 · Updated this week
- DataComp for Language Models ☆1,206 · Updated last month
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,113 · Updated 8 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,669 · Updated last week
- Reference implementation for DPO (Direct Preference Optimization) ☆2,323 · Updated 5 months ago
- A family of open-source Mixture-of-Experts (MoE) large language models ☆1,425 · Updated 10 months ago
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model ☆4,035 · Updated 3 months ago
- Fast and memory-efficient exact attention ☆15,064 · Updated this week
- An Open-source Toolkit for LLM Development ☆2,747 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,168 · Updated 7 months ago
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,339 · Updated last month
- General technology for enabling AI capabilities with LLMs and MLLMs ☆3,797 · Updated last week
- Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI ☆1,358 · Updated 9 months ago
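
One entry above, the minimal BPE codebase, names an algorithm compact enough to sketch inline. Below is an illustrative, hypothetical byte-level BPE training loop; the function names are invented for this sketch and are not taken from any repository on this page.

```python
# Minimal sketch of Byte Pair Encoding (BPE) training, the algorithm the
# minbpe-style entry above refers to. Hypothetical illustration only; it is
# not code from any repository listed on this page.
from collections import Counter

def get_pair_counts(ids):
    # Count occurrences of each adjacent token pair (overlapping).
    return Counter(zip(ids, ids[1:]))

def merge(ids, pair, new_id):
    # Replace every occurrence of `pair` in `ids` with `new_id`.
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train_bpe(text, num_merges):
    ids = list(text.encode("utf-8"))  # start from raw bytes (ids 0..255)
    merges = {}
    for step in range(num_merges):
        counts = get_pair_counts(ids)
        if not counts:
            break
        pair = max(counts, key=counts.get)  # most frequent adjacent pair
        new_id = 256 + step                 # next free token id
        ids = merge(ids, pair, new_id)
        merges[pair] = new_id
    return merges, ids

merges, ids = train_bpe("aaabdaaabac", 3)
print(merges)  # {(97, 97): 256, (256, 97): 257, (257, 98): 258}
```

Repeatedly merging the most frequent adjacent pair into a fresh token id is the whole core algorithm; production tokenizers layer on details such as regex pre-splitting and special tokens.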