explosion / curated-transformers
A PyTorch library of curated Transformer models and their composable components
⭐888 · Updated last year
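For orientation, here is a minimal generation sketch in the style of the library's documented API; the model name, device, and greedy decoding config below are illustrative assumptions, not the only supported options:

```python
import torch
from curated_transformers.generation import AutoGenerator, GreedyGeneratorConfig

# Load a decoder-only model from the Hugging Face Hub.
# The model name is an illustrative choice; any supported causal LM works.
generator = AutoGenerator.from_hf_hub(
    name="tiiuae/falcon-7b-instruct",
    device=torch.device("cuda", index=0),
)

# Generate with greedy decoding (sampling-based configs are also available).
print(generator(["What is Python in one sentence?"], GreedyGeneratorConfig()))
```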
Alternatives and similar repositories for curated-transformers
Users interested in curated-transformers are comparing it to the libraries listed below:
- Fast & Simple repository for pre-training and fine-tuning T5-style models · ⭐1,004 · Updated 9 months ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript · ⭐581 · Updated 11 months ago
- Fine-tune Mistral-7B on 3090s, A100s, H100s · ⭐711 · Updated last year
- Cramming the training of a (BERT-type) language model into limited compute · ⭐1,332 · Updated 11 months ago
- Neural Search · ⭐356 · Updated 2 months ago
- Puzzles for exploring transformers · ⭐347 · Updated 2 years ago
- Extend existing LLMs well beyond their original training length with constant memory usage, without retraining · ⭐697 · Updated last year
- Creative interactive views of any dataset · ⭐838 · Updated 5 months ago
- Code repository for the UltraFastBERT paper · ⭐514 · Updated last year
- An interactive exploration of Transformer programming · ⭐264 · Updated last year
- An open collection of implementation tips, tricks, and resources for training large language models · ⭐473 · Updated 2 years ago
- Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project · ⭐561 · Updated last year
- Language Modeling with the H3 State Space Model · ⭐518 · Updated last year
- Exact structure out of any language model completion · ⭐509 · Updated last year
- What would you do with 1000 H100s... · ⭐1,048 · Updated last year
- Just a bunch of useful embeddings for scikit-learn pipelines · ⭐499 · Updated 2 months ago
- A tool to analyze and debug neural networks in PyTorch. Use a GUI to traverse the computation graph and view the data from many different… · ⭐287 · Updated 5 months ago
- Website for hosting the Open Foundation Models Cheat Sheet · ⭐267 · Updated 3 weeks ago
- Generate textbook-quality synthetic LLM pretraining data · ⭐498 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… · ⭐345 · Updated 10 months ago
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" · ⭐1,059 · Updated last year
- ⭐412 · Updated last year
- LLM papers I'm reading, mostly on inference and model compression · ⭐729 · Updated last year
- git extension for {collaborative, communal, continual} model development · ⭐212 · Updated 6 months ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines · ⭐196 · Updated last year
- Effortless plug-and-play optimizer to cut model training costs by 50%. A new optimizer that is 2x faster than Adam on LLMs · ⭐380 · Updated 11 months ago
- A benchmark to evaluate language models on questions I've previously asked them to solve · ⭐1,011 · Updated last month
- Convolutions for Sequence Modeling · ⭐883 · Updated 11 months ago
- A repository for research on medium-sized language models · ⭐495 · Updated 3 weeks ago
- A tiny library for coding with large language models · ⭐1,232 · Updated 10 months ago