explosion / curated-transformers
🤖 A PyTorch library of curated Transformer models and their composable components
☆884 · Updated 11 months ago
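For orientation, a minimal text-generation sketch with curated-transformers, assuming the `AutoGenerator` API and the Falcon checkpoint shown in the project's README (class names, the `name=` parameter, and the checkpoint are taken from there and may differ across versions):

```python
import torch

from curated_transformers.generation import AutoGenerator, GreedyGeneratorConfig

# Download a supported causal LM from the Hugging Face Hub and wrap it in a
# generator. The checkpoint name follows the README example (assumption).
generator = AutoGenerator.from_hf_hub(
    name="tiiuae/falcon-7b-instruct",
    device=torch.device("cuda", index=0),
)

# Greedy decoding over a batch of prompts returns a list of generated strings.
print(generator(["What is spaCy?"], GreedyGeneratorConfig()))
```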
Alternatives and similar repositories for curated-transformers:
Users interested in curated-transformers are comparing it to the libraries listed below.
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆999 · Updated 7 months ago
- Neural Search ☆352 · Updated last week
- Creative interactive views of any dataset. ☆837 · Updated 2 months ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆574 · Updated 8 months ago
- An interactive exploration of Transformer programming. ☆261 · Updated last year
- Just a bunch of useful embeddings for scikit-learn pipelines ☆484 · Updated 2 months ago
- String-to-String Algorithms for Natural Language Processing ☆541 · Updated 7 months ago
- The repository for the code of the UltraFastBERT paper ☆517 · Updated 11 months ago
- Effortless plug-and-play optimizer that cuts model training costs by 50%; a new optimizer that is 2x faster than Adam on LLMs ☆381 · Updated 9 months ago
- A repository for research on medium-sized language models ☆493 · Updated 2 months ago
- Puzzles for exploring transformers ☆333 · Updated last year
- 🦙 Integrating LLMs into structured NLP pipelines ☆1,213 · Updated 2 months ago
- Fine-tune Mistral-7B on 3090s, A100s, and H100s ☆709 · Updated last year
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,324 · Updated 9 months ago
- What would you do with 1000 H100s... ☆1,016 · Updated last year
- Language Modeling with the H3 State Space Model ☆516 · Updated last year
- A tiny library for coding with large language models ☆1,225 · Updated 8 months ago
- Blazing-fast framework for fine-tuning similarity learning models ☆656 · Updated 2 months ago
- Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project ☆550 · Updated 9 months ago
- Prompt programming with foundation models (FMs). ☆440 · Updated 7 months ago
- Extend existing LLMs far beyond their original training length with constant memory usage, without retraining ☆690 · Updated 11 months ago
- Neural Search ☆327 · Updated 9 months ago
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,059 · Updated last year
- Convolutions for Sequence Modeling ☆877 · Updated 9 months ago
- Inference code for Persimmon-8B ☆415 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆343 · Updated 7 months ago
- Utilities for decoding deep representations (like sentence embeddings) back to text ☆777 · Updated last month
- An open collection of implementation tips, tricks, and resources for training large language models ☆471 · Updated 2 years ago
- Minimal example scripts for the Hugging Face Trainer, focused on staying under 150 lines ☆198 · Updated 10 months ago