gautierdag / bpeasy
Fast bare-bones BPE for modern tokenizer training
☆142 · Updated 2 weeks ago
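bpeasy trains byte-pair-encoding (BPE) vocabularies. As a refresher on what BPE training does, here is a minimal, generic sketch of the merge loop — an illustration of the algorithm only; `train_bpe`, its word-list input format, and the toy corpus are assumptions made for this example, not bpeasy's actual API:

```python
from collections import Counter

def train_bpe(corpus: list[str], num_merges: int) -> list[tuple[str, str]]:
    """Learn BPE merge rules from a toy corpus (generic illustration, not bpeasy's API)."""
    # Represent each word as a tuple of single-character symbols, with counts.
    words = Counter(tuple(word) for word in corpus)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs across all words, weighted by word frequency.
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        merges.append(best)
        # Rewrite every word, replacing each occurrence of the best pair
        # with a single merged symbol.
        new_words = Counter()
        for word, freq in words.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_words[tuple(out)] += freq
        words = new_words
    return merges

merges = train_bpe(["low", "lower", "lowest", "low"], num_merges=3)
# The shared prefix "low" is built up first: ('l','o'), then ('lo','w'), then ('low','e').
```

Production trainers (bpeasy included) operate on bytes rather than characters and use priority queues instead of this quadratic rescan, but the merge rule being learned is the same.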
Related projects
Alternatives and complementary repositories for bpeasy
- Code for training & evaluating Contextual Document Embedding models ☆92 · Updated this week
- A puzzle to learn about prompting ☆119 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models ☆210 · Updated last month
- Manage scalable open LLM inference endpoints in Slurm clusters ☆237 · Updated 3 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆219 · Updated last week
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆176 · Updated 5 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆251 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆83 · Updated last week
- ☆91 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆175 · Updated 2 months ago
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆92 · Updated last month
- Understand and test language model architectures on synthetic tasks ☆161 · Updated 6 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆273 · Updated last month
- Extract full next-token probabilities via language model APIs ☆228 · Updated 8 months ago
- A comprehensive deep dive into the world of tokens ☆213 · Updated 4 months ago
- Best practices & guides on how to write distributed PyTorch training code ☆278 · Updated this week
- Sparse autoencoders ☆333 · Updated 2 weeks ago
- Long-context evaluation for large language models ☆185 · Updated this week
- ☆99 · Updated 3 months ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆194 · Updated 6 months ago
- Simple Transformer in JAX ☆115 · Updated 4 months ago
- BABILong: a benchmark for LLM evaluation using the needle-in-a-haystack approach ☆150 · Updated 2 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆171 · Updated 3 months ago
- Code for the NeurIPS 2024 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆160 · Updated last month
- A MAD laboratory to improve AI architecture designs 🧪 ☆95 · Updated 6 months ago
- Automated identification of redundant layer blocks for pruning in large language models ☆194 · Updated 6 months ago
- A bagel, with everything ☆312 · Updated 6 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding" (ACL 2024) ☆202 · Updated last week
- An open-source toolkit for LLM distillation ☆350 · Updated last month
- ☆445 · Updated last week