gautierdag / bpeasy
Fast bare-bones BPE for modern tokenizer training
☆165 · Updated 3 months ago
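For orientation, here is a minimal pure-Python sketch of the byte-pair-encoding training loop that tools in this space implement: repeatedly count adjacent symbol pairs across the corpus and merge the most frequent one. The `train_bpe` name, its signature, and the toy corpus are illustrative assumptions for exposition, not bpeasy's actual (Rust-backed) API.

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    """Learn BPE merge rules from a list of words (illustrative sketch only)."""
    # Represent each word as a tuple of single-character symbols.
    words = Counter(tuple(word) for word in corpus)
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in words.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break  # nothing left to merge
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Rewrite every word, fusing each occurrence of the best pair.
        new_words = Counter()
        for symbols, freq in words.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_words[tuple(out)] += freq
        words = new_words
    return merges

if __name__ == "__main__":
    merges = train_bpe(["low", "lower", "lowest", "newest", "widest"], num_merges=5)
    print(merges)  # e.g. [('l', 'o'), ('lo', 'w'), ('e', 's'), ('es', 't'), ...]
```

Production trainers typically operate at the byte level with regex pre-tokenization and are heavily optimized; this loop only shows the core merge algorithm.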
Alternatives and similar repositories for bpeasy
Users interested in bpeasy are comparing it to the libraries listed below.
- A puzzle to learn about prompting ☆135 · Updated 2 years ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 11 months ago
- RuLES: a benchmark for evaluating rule-following in language models ☆234 · Updated 7 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆257 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆164 · Updated 3 months ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆348 · Updated last year
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆311 · Updated this week
- Website for hosting the Open Foundation Models Cheat Sheet. ☆268 · Updated 4 months ago
- A comprehensive deep dive into the world of tokens ☆225 · Updated last year
- ☆309 · Updated last year
- Code for training & evaluating Contextual Document Embedding models ☆197 · Updated 4 months ago
- Understand and test language model architectures on synthetic tasks. ☆229 · Updated last week
- Solve puzzles. Learn CUDA. ☆63 · Updated last year
- A repository for research on medium-sized language models. ☆511 · Updated 4 months ago
- JAX implementation of the Llama 2 model ☆218 · Updated last year
- ☆94 · Updated 2 years ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆274 · Updated last year
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- Extract full next-token probabilities via language model APIs ☆248 · Updated last year
- A really tiny autograd engine ☆95 · Updated 4 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆280 · Updated last month
- Multipack distributed sampler for fast padding-free training of LLMs ☆201 · Updated last year
- Simple Transformer in JAX ☆139 · Updated last year
- ☆89 · Updated last year
- ☆221 · Updated 7 months ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆195 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆193 · Updated last year
- An interactive exploration of Transformer programming. ☆269 · Updated last year
- Annotated version of the Mamba paper ☆490 · Updated last year
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year