SumanthRH / tokenization
A comprehensive deep dive into the world of tokens
☆221 · Updated 9 months ago
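For readers unfamiliar with the topic, here is a minimal sketch of what tokenization looks like in practice. It uses the Hugging Face `transformers` library and the `gpt2` tokenizer purely as illustrative assumptions; the repository itself may cover different tokenizers and tooling.

```python
# Minimal tokenization sketch (illustrative only; not taken from the repository).
# Assumes the Hugging Face `transformers` package is installed.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # GPT-2's BPE tokenizer

text = "Tokenization splits text into subword units."
tokens = tokenizer.tokenize(text)   # subword strings, e.g. ['Token', 'ization', ...]
ids = tokenizer.encode(text)        # corresponding integer IDs fed to the model

print(tokens)
print(ids)
print(tokenizer.decode(ids))        # round-trips back to the original text
```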
Alternatives and similar repositories for tokenization:
Users interested in tokenization are comparing it to the libraries listed below.
- ☆92 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 5 months ago
- data cleaning and curation for unstructured text ☆329 · Updated 8 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆254 · Updated 9 months ago
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆105 · Updated 6 months ago
- A bagel, with everything. ☆318 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ☆498 · Updated last year
- 📝 Reference-Free automatic summarization evaluation with potential hallucination detection ☆100 · Updated last year
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆198 · Updated 11 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆266 · Updated last month
- ☆151 · Updated 4 months ago
- experiments with inference on llama ☆104 · Updated 10 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 8 months ago
- code for training & evaluating Contextual Document Embedding models ☆180 · Updated 2 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆234 · Updated 5 months ago
- Highly commented implementations of Transformers in PyTorch ☆135 · Updated last year
- ☆524 · Updated 7 months ago
- awesome synthetic (text) datasets ☆267 · Updated 5 months ago
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆99 · Updated last year
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆128 · Updated 3 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆93 · Updated last month
- A puzzle to learn about prompting ☆126 · Updated last year
- Train your own SOTA deductive reasoning model ☆83 · Updated last month
- LoRA and DoRA from Scratch Implementations ☆200 · Updated last year
- ☆153 · Updated 8 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆186 · Updated 8 months ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- ☆112 · Updated this week
- This is our own implementation of 'Layer Selective Rank Reduction' ☆233 · Updated 10 months ago