gautierdag / bpeasy
Fast bare-bones BPE for modern tokenizer training
☆164 · Updated 2 months ago
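bpeasy's training step is a byte-level BPE merge loop. The snippet below is a minimal pure-Python sketch of that loop for orientation only; the function and variable names are illustrative, not bpeasy's actual API (the library implements this logic in Rust and exposes Python bindings).

```python
# Minimal sketch of byte-level BPE training. Illustrative only: names and
# structure are assumptions for exposition, not bpeasy's API.
from collections import Counter


def train_bpe(texts, vocab_size):
    """Learn BPE merges over a corpus of strings, starting from raw bytes."""
    # Pre-tokenize on whitespace, then represent each word as a tuple of
    # single-byte symbols; the base vocabulary is the 256 byte values.
    words = Counter(
        tuple(bytes([b]) for b in w.encode("utf-8"))
        for t in texts
        for w in t.split()
    )
    merges = []
    num_merges = vocab_size - 256

    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)

        # Replace every occurrence of the best pair with its merged symbol.
        new_words = Counter()
        for word, freq in words.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_words[tuple(out)] += freq
        words = new_words

    return merges


if __name__ == "__main__":
    print(train_bpe(["low lower lowest", "new newer newest"], 256 + 10))
```

The whitespace split stands in for the regex pre-tokenizer that production BPE trainers use; merges are only ever learned within pre-token boundaries.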
Alternatives and similar repositories for bpeasy
Users interested in bpeasy are comparing it to the libraries listed below
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆349 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆256 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆152 · Updated last month
- RuLES: a benchmark for evaluating rule-following in language models ☆230 · Updated 5 months ago
- ☆307 · Updated last year
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆221 · Updated last month
- Website for hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated 3 months ago
- code for training & evaluating Contextual Document Embedding models ☆197 · Updated 3 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 9 months ago
- A puzzle to learn about prompting ☆132 · Updated 2 years ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆270 · Updated last year
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year
- ☆138 · Updated 4 months ago
- Implementation of the Llama architecture with RLHF + Q-learning ☆166 · Updated 6 months ago
- Annotated version of the Mamba paper ☆487 · Updated last year
- nanoGPT-like codebase for LLM training ☆102 · Updated 3 months ago
- A comprehensive deep dive into the world of tokens ☆226 · Updated last year
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆250 · Updated 2 weeks ago
- A repository for research on medium sized language models. ☆509 · Updated 2 months ago
- Minimal (400 LOC) implementation Maximum (multi-node, FSDP) GPT training ☆130 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆191 · Updated last year
- ☆275 · Updated last year
- ☆93 · Updated last year
- An extension of the nanoGPT repository for training small MOE models. ☆178 · Updated 5 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆812 · Updated 3 weeks ago
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- Normalized Transformer (nGPT) ☆186 · Updated 9 months ago
- ☆380 · Updated this week
- A MAD laboratory to improve AI architecture designs 🧪 ☆124 · Updated 8 months ago