openai / transformer-debugger
☆4,058 · Updated 8 months ago
Alternatives and similar repositories for transformer-debugger:
Users interested in transformer-debugger are comparing it to the libraries listed below.
- PyTorch native post-training library ☆4,856 · Updated this week
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆5,790 · Updated 2 months ago
- A framework for few-shot evaluation of language models. ☆7,848 · Updated this week
- Tools for merging pretrained large language models. ☆5,260 · Updated last week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,697 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆10,325 · Updated this week
- Modeling, training, eval, and inference code for OLMo ☆5,200 · Updated this week
- Go ahead and axolotl questions ☆8,648 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆6,795 · Updated 7 months ago
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,378 · Updated 2 months ago
- Train transformer language models with reinforcement learning. ☆11,782 · Updated this week
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆4,879 · Updated 3 weeks ago
- General technology for enabling AI capabilities with LLMs and MLLMs ☆3,845 · Updated last month
- Fast and memory-efficient exact attention ☆15,541 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,001 · Updated 3 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,250 · Updated 8 months ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆11,592 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,702 · Updated last month
- A family of open-source Mixture-of-Experts (MoE) Large Language Models ☆1,446 · Updated 11 months ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,220 · Updated 9 months ago
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆9,403 · Updated 7 months ago
- Large Language Model Text Generation Inference ☆9,777 · Updated this week
- A PyTorch native library for large model training ☆3,326 · Updated this week
- An Open-source Toolkit for LLM Development ☆2,758 · Updated last month
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆1,946 · Updated last month
- Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI ☆1,361 · Updated 10 months ago
- High-speed Large Language Model Serving for Local Deployment ☆8,106 · Updated this week
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆1,712 · Updated 6 months ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆17,363 · Updated this week