explosion / curated-transformers
🤖 A PyTorch library of curated Transformer models and their composable components
☆891 · Updated last year
Alternatives and similar repositories for curated-transformers
Users interested in curated-transformers are comparing it to the libraries listed below.
- Creative interactive views of any dataset. ☆841 · Updated 5 months ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆584 · Updated 11 months ago
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆1,005 · Updated 10 months ago
- Fine-tune Mistral-7B on 3090s, A100s, and H100s ☆714 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,060 · Updated last year
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,335 · Updated last year
- Language Modeling with the H3 State Space Model ☆519 · Updated last year
- Code repository for the UltraFastBERT paper ☆516 · Updated last year
- Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project. ☆561 · Updated last year
- Extend existing LLMs well beyond their original training length with constant memory usage, without retraining ☆698 · Updated last year
- Convolutions for Sequence Modeling ☆890 · Updated last year
- Effortless plug-and-play optimizer that cuts model training costs by 50%; 2x faster than Adam on LLMs. ☆381 · Updated last year
- Inference code for Persimmon-8B ☆415 · Updated last year
- Neural Search ☆358 · Updated 3 months ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆345 · Updated 10 months ago
- Just a bunch of useful embeddings for scikit-learn pipelines ☆500 · Updated 2 months ago
- Tune any FALCON in 4-bit ☆467 · Updated last year
- An open collection of implementation tips, tricks, and resources for training large language models ☆475 · Updated 2 years ago
- What would you do with 1000 H100s... ☆1,055 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆554 · Updated 5 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆801 · Updated last week
- Blazing fast framework for fine-tuning similarity learning models ☆656 · Updated 2 months ago
- Data cleaning and curation for unstructured text ☆327 · Updated 10 months ago
- Task-based datasets, preprocessing, and evaluation for sequence models. ☆576 · Updated last month
- Puzzles for exploring transformers ☆349 · Updated 2 years ago
- Website hosting the Open Foundation Models Cheat Sheet ☆267 · Updated last month
- String-to-String Algorithms for Natural Language Processing ☆549 · Updated 10 months ago
- ☆415 · Updated last year
- Exact structure out of any language model completion. ☆511 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,113 · Updated last year