gautierdag / bpeasy
Fast bare-bones BPE for modern tokenizer training
☆159 · Updated 2 months ago
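bpeasy trains byte-pair-encoding (BPE) tokenizers. As a rough illustration of what one BPE training step does — this is a toy sketch in plain Python, not bpeasy's actual API — the trainer repeatedly finds the most frequent adjacent symbol pair in the corpus and merges it into a new token:

```python
# Toy single step of BPE training (illustrative only, not bpeasy's API).
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across a {symbols-tuple: frequency} corpus."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with one merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])  # fuse the pair
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = merged.get(tuple(out), 0) + freq
    return merged

# Tiny corpus: word spellings with their counts.
corpus = {("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2, ("n", "e", "w"): 3}
pair = most_frequent_pair(corpus)
corpus = merge_pair(corpus, pair)
```

A real trainer like bpeasy repeats this merge loop until the target vocabulary size is reached, and does so over raw bytes with optimized Rust internals rather than Python dicts.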
Alternatives and similar repositories for bpeasy
Users interested in bpeasy are comparing it to the libraries listed below.
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆134 · Updated this week
- Understand and test language model architectures on synthetic tasks. ☆217 · Updated 2 weeks ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆120 · Updated 6 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free. ☆231 · Updated 7 months ago
- ☆92 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models. ☆226 · Updated 3 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024). ☆190 · Updated last year
- ☆78 · Updated 11 months ago
- ☆270 · Updated 11 months ago
- nanoGPT-like codebase for LLM training. ☆98 · Updated last month
- Manage scalable open LLM inference endpoints in Slurm clusters. ☆260 · Updated 11 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere). ☆101 · Updated 3 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day. ☆257 · Updated last year
- A puzzle to learn about prompting. ☆128 · Updated 2 years ago
- Extract full next-token probabilities via language model APIs. ☆247 · Updated last year
- Collection of autoregressive model implementations. ☆85 · Updated last month
- prime-rl: a codebase for decentralized async RL training at scale. ☆341 · Updated this week
- JAX implementation of the Llama 2 model. ☆218 · Updated last year
- An extension of the nanoGPT repository for training small MoE models. ☆148 · Updated 3 months ago
- Multipack distributed sampler for fast padding-free training of LLMs. ☆191 · Updated 10 months ago
- Code for training and evaluating Contextual Document Embedding models. ☆194 · Updated last month
- ☆200 · Updated this week
- Website hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated last month
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind. ☆177 · Updated 9 months ago
- ☆303 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning. ☆165 · Updated 4 months ago
- ☆134 · Updated 2 months ago
- Scaling Data-Constrained Language Models. ☆335 · Updated 9 months ago
- ☆114 · Updated 5 months ago
- Normalized Transformer (nGPT). ☆183 · Updated 7 months ago