gautierdag / bpeasy
Fast bare-bones BPE for modern tokenizer training
☆142 · Updated last month
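For readers landing here, the library's surface is essentially one training call. Below is a minimal sketch, assuming the train_bpe(iterator, regex, max_token_length, vocab_size) entry point shown in bpeasy's README; the corpus, the simplified split regex, and the parameter values are illustrative stand-ins, not prescribed defaults.

```python
# Minimal sketch of training a BPE vocabulary with bpeasy.
# Assumes train_bpe(iterator, regex, max_token_length, vocab_size)
# as shown in the project's README; corpus/regex/sizes are illustrative.
import bpeasy

# Any iterator of strings works; a tiny in-memory corpus stands in for real data.
corpus = iter([
    "Fast bare-bones BPE for modern tokenizer training.",
    "Byte-pair encoding repeatedly merges the most frequent adjacent pair.",
])

# Simplified GPT-style pre-tokenization pattern: words, digit runs, punctuation runs.
split_regex = r" ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+"

vocab = bpeasy.train_bpe(
    corpus,       # iterator yielding training text
    split_regex,  # regex used to pre-split text before pair merging
    32,           # maximum token length in bytes
    300,          # target vocabulary size (kept tiny to match the toy corpus)
)
print(len(vocab))  # mapping from token bytes to rank
```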
Related projects
Alternatives and complementary repositories for bpeasy
- ☆91 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆84 · Updated this week
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆195 · Updated 6 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆237 · Updated 4 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆221 · Updated 3 weeks ago
- A puzzle to learn about prompting ☆121 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆179 · Updated 5 months ago
- RuLES: a benchmark for evaluating rule-following in language models ☆211 · Updated last month
- ☆292 · Updated 5 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆113 · Updated 7 months ago
- Code for training & evaluating Contextual Document Embedding models ☆119 · Updated this week
- Simple Transformer in JAX ☆119 · Updated 5 months ago
- ☆73 · Updated 4 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆252 · Updated last year
- Muon optimizer for neural networks: >30% extra sample efficiency, <3% wallclock overhead ☆113 · Updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆229 · Updated 3 weeks ago
- Extract full next-token probabilities via language model APIs ☆229 · Updated 9 months ago
- Normalized Transformer (nGPT) ☆87 · Updated this week
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆81 · Updated last year
- Sparse autoencoders ☆344 · Updated last week
- Experiments toward training a new and improved T5 ☆76 · Updated 7 months ago
- A comprehensive deep dive into the world of tokens ☆214 · Updated 4 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆178 · Updated 3 months ago
- JAX implementation of the Llama 2 model ☆210 · Updated 9 months ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆334 · Updated 3 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆158 · Updated last month
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆80 · Updated 11 months ago
- Textbook on reinforcement learning from human feedback ☆76 · Updated 3 weeks ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆93 · Updated last month
- A MAD laboratory to improve AI architecture designs 🧪 ☆95 · Updated 6 months ago