openai / transformer-debugger
☆4,100 · Updated last year
Alternatives and similar repositories for transformer-debugger
Users interested in transformer-debugger are comparing it to the libraries listed below.
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,121 · Updated last month
- PyTorch native post-training library ☆5,523 · Updated last week
- Modeling, training, eval, and inference code for OLMo ☆6,028 · Updated last month
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,610 · Updated 11 months ago
- DataComp for Language Models ☆1,371 · Updated last month
- Robust recipes to align language models with human and AI preferences ☆5,390 · Updated last month
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,608 · Updated last year
- nanoGPT style version of Llama 3.1 ☆1,429 · Updated last year
- Tools for merging pretrained large language models. ☆6,352 · Updated 3 weeks ago
- AllenAI's post-training codebase ☆3,232 · Updated this week
- A PyTorch native platform for training generative AI models ☆4,525 · Updated this week
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,399 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,057 · Updated last year
- Training LLMs with QLoRA + FSDP ☆1,527 · Updated 11 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,870 · Updated 2 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,635 · Updated last year
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,663 · Updated this week
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,628 · Updated 4 months ago
- The official PyTorch implementation of Google's Gemma models ☆5,557 · Updated 4 months ago
- Minimalistic large language model 3D-parallelism training ☆2,252 · Updated last month
- A framework for few-shot evaluation of language models. ☆10,303 · Updated this week
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆9,987 · Updated last year
- Reference implementation for DPO (Direct Preference Optimization) ☆2,745 · Updated last year
- NanoGPT (124M) in 3 minutes ☆3,176 · Updated 2 months ago
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆2,050 · Updated last year
- A unified evaluation framework for large language models ☆2,723 · Updated 2 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,856 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆985 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,680 · Updated last year
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,768 · Updated last year