explosion / curated-transformers
🤗 A PyTorch library of curated Transformer models and their composable components
⭐896 · Updated last year
Alternatives and similar repositories for curated-transformers
Users interested in curated-transformers are comparing it to the libraries listed below.
- Explore and understand your training and validation data. ⭐845 · Updated 8 months ago
- Fast & Simple repository for pre-training and fine-tuning T5-style models ⭐1,009 · Updated last year
- Cramming the training of a (BERT-type) language model into limited compute. ⭐1,348 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s ⭐718 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & Javascript ⭐599 · Updated last year
- The repository for the code of the UltraFastBERT paper ⭐519 · Updated last year
- Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project. ⭐562 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ⭐349 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ⭐1,063 · Updated last year
- Blazing fast framework for fine-tuning similarity learning models ⭐657 · Updated 5 months ago
- Neural Search ⭐363 · Updated 6 months ago
- Language Modeling with the H3 State Space Model ⭐519 · Updated last year
- An open collection of implementation tips, tricks and resources for training large language models ⭐479 · Updated 2 years ago
- Convolutions for Sequence Modeling ⭐898 · Updated last year
- Prompt programming with FMs. ⭐444 · Updated last year
- Exact structure out of any language model completion. ⭐512 · Updated 2 years ago
- Tune any FALCON in 4-bit ⭐466 · Updated 2 years ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ⭐719 · Updated last year
- ⭐461 · Updated last year
- String-to-String Algorithms for Natural Language Processing ⭐553 · Updated last year
- A tiny library for coding with large language models. ⭐1,234 · Updated last year
- Train to 94% on CIFAR-10 in <6.3 seconds on a single A100. Or ~95.79% in ~110 seconds (or less!) ⭐1,282 · Updated 8 months ago
- A comprehensive deep dive into the world of tokens ⭐226 · Updated last year
- Puzzles for exploring transformers ⭐368 · Updated 2 years ago
- A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick ⭐293 · Updated last year
- just a bunch of useful embeddings for scikit-learn pipelines ⭐517 · Updated last month
- ⭐588 · Updated 2 years ago
- The official implementation of "Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training" ⭐969 · Updated last year
- Effortless plug-and-play optimizer that cuts model training costs by 50%; 2x faster than Adam on LLMs. ⭐381 · Updated last year
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging F… ⭐576 · Updated last year