SumanthRH / tokenization
A comprehensive deep dive into the world of tokens
☆225 · Updated last year
Alternatives and similar repositories for tokenization
Users interested in tokenization are comparing it to the libraries listed below.
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆232 · Updated 11 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆268 · Updated 4 months ago
- ☆94 · Updated 2 years ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆274 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆165 · Updated 3 months ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆195 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆83 · Updated 2 years ago
- Generate textbook-quality synthetic LLM pretraining data ☆505 · Updated last year
- Experiments with inference on Llama ☆104 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆110 · Updated last year
- Let's build better datasets, together! ☆262 · Updated 9 months ago
- Batched LoRAs ☆346 · Updated 2 years ago
- Toolkit for attaching, training, saving, and loading new heads for transformer models ☆289 · Updated 7 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- A bagel, with everything. ☆325 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆201 · Updated last year
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ☆189 · Updated last year
- Code for training & evaluating Contextual Document Embedding models ☆197 · Updated 4 months ago
- A puzzle to learn about prompting ☆135 · Updated 2 years ago
- Awesome synthetic (text) datasets ☆297 · Updated 2 months ago
- An introduction to LLM sampling ☆79 · Updated 9 months ago
- Experiments on speculative sampling with Llama models ☆128 · Updated 2 years ago
- Data cleaning and curation for unstructured text ☆328 · Updated last year
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆105 · Updated 2 years ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆257 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- A compact LLM pretrained in 9 days on high-quality data ☆328 · Updated 5 months ago
- Pre-training code for the Amber 7B LLM ☆168 · Updated last year
- Full fine-tuning of large language models without large memory requirements ☆94 · Updated 2 weeks ago
- Convert all of libgen to high-quality Markdown ☆253 · Updated last year