gautierdag / bpeasy
Fast bare-bones BPE for modern tokenizer training
☆174 · Updated 6 months ago
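bpeasy trains byte-level BPE vocabularies (the library is Rust-backed with Python bindings). As a rough illustration of the greedy pair-merge loop that BPE trainers of this kind implement, here is a toy pure-Python sketch; this is not bpeasy's API, and the function name and sample corpus are invented for the example:

```python
from collections import Counter

# Toy BPE trainer: NOT bpeasy's API, just an illustration of the merge loop.
def train_toy_bpe(corpus: list[str], num_merges: int) -> list[tuple[str, str]]:
    # Represent each word as a tuple of single-character symbols.
    words = Counter(tuple(w) for w in corpus)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in words.items():
            for pair in zip(word, word[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent pair wins
        merges.append(best)
        # Rewrite every word with the chosen pair fused into one symbol.
        new_words = Counter()
        for word, freq in words.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_words[tuple(out)] += freq
        words = new_words
    return merges

print(train_toy_bpe(["low", "lower", "lowest", "low"], num_merges=3))
# e.g. [('l', 'o'), ('lo', 'w'), ('low', 'e')]
```

Production trainers such as bpeasy run the same idea over byte sequences with a pre-tokenization regex and heavily optimized pair counting, which is where the speed difference comes from.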
Alternatives and similar repositories for bpeasy
Users interested in bpeasy are comparing it to the libraries listed below.
- A puzzle to learn about prompting ☆135 · Updated 2 years ago
- RuLES: a benchmark for evaluating rule-following in language models ☆245 · Updated 10 months ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆355 · Updated last year
- Code for training & evaluating Contextual Document Embedding models ☆201 · Updated 7 months ago
- Accelerate and optimize performance with streamlined training and serving options in JAX ☆328 · Updated this week
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆181 · Updated 6 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆233 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆259 · Updated 2 years ago
- ☆94 · Updated 2 years ago
- JAX implementation of the Llama 2 model ☆215 · Updated last year
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆329 · Updated 2 months ago
- Extract full next-token probabilities via language model APIs ☆248 · Updated last year
- Understand and test language model architectures on synthetic tasks ☆248 · Updated 3 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · Updated last year
- ☆287 · Updated last year
- Website for hosting the Open Foundation Models Cheat Sheet ☆269 · Updated 8 months ago
- Large-scale 4D-parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆86 · Updated 2 years ago
- ☆314 · Updated last year
- ☆150 · Updated 4 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆278 · Updated last year
- Annotated version of the Mamba paper ☆493 · Updated last year
- Long-context evaluation for large language models ☆225 · Updated 10 months ago
- A comprehensive deep dive into the world of tokens ☆227 · Updated last year
- ☆92 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆109 · Updated 10 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆244 · Updated 7 months ago
- ☆225 · Updated last month
- Simple Byte Pair Encoding mechanism for tokenization, written purely in C ☆142 · Updated last year
- MoE training for Me and You and maybe other people ☆309 · Updated last week