explosion / curated-transformers
🤖 A PyTorch library of curated Transformer models and their composable components
★ 866 · Updated 7 months ago
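Since the library's headline feature is composable models loadable from the Hugging Face Hub, here is a minimal sketch of its generation API. The `AutoGenerator` and `SampleGeneratorConfig` names follow the project's README as I recall it, but the model name and exact keyword arguments are illustrative assumptions; check the current documentation before relying on them.

```python
# Hedged sketch: load a supported causal LM from the Hugging Face Hub with
# curated-transformers and sample a completion. Model name and keyword
# arguments are assumptions for illustration only.
import torch
from curated_transformers.generation import AutoGenerator, SampleGeneratorConfig

# Wrap a Hub checkpoint in a generator (model name is illustrative).
generator = AutoGenerator.from_hf_hub(
    name="tiiuae/falcon-7b-instruct",
    device=torch.device("cpu"),
)

# Sample with a temperature/top-k configuration and print the completions.
config = SampleGeneratorConfig(temperature=1.0, top_k=2)
print(generator(["What is spaCy?"], config))
```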
Related projects
Alternatives and complementary repositories for curated-transformers
- Fast & Simple repository for pre-training and fine-tuning T5-style models · ★ 970 · Updated 3 months ago
- Creative interactive views of any dataset. · ★ 829 · Updated 8 months ago
- Neural Search · ★ 344 · Updated 5 months ago
- The repository for the code of the UltraFastBERT paper · ★ 514 · Updated 7 months ago
- What would you do with 1000 H100s... · ★ 903 · Updated 10 months ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & Javascript · ★ 551 · Updated 4 months ago
- Fine-tune mistral-7B on 3090s, a100s, h100s · ★ 702 · Updated last year
- Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project. · ★ 534 · Updated 5 months ago
- Blazing fast framework for fine-tuning similarity learning models · ★ 643 · Updated last month
- A tool to analyze and debug neural networks in pytorch. Use a GUI to traverse the computation graph and view the data from many different… · ★ 270 · Updated 3 weeks ago
- Inference code for Persimmon-8B · ★ 416 · Updated last year
- Puzzles for exploring transformers · ★ 325 · Updated last year
- An interactive exploration of Transformer programming. · ★ 246 · Updated last year
- Cramming the training of a (BERT-type) language model into limited compute. · ★ 1,296 · Updated 5 months ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… · ★ 334 · Updated 3 months ago
- Convolutions for Sequence Modeling · ★ 869 · Updated 5 months ago
- Exact structure out of any language model completion. · ★ 502 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" · ★ 1,055 · Updated 8 months ago
- A crude RLHF layer on top of nanoGPT with Gumbel-Softmax trick · ★ 287 · Updated 11 months ago
- String-to-String Algorithms for Natural Language Processing · ★ 536 · Updated 3 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining · ★ 675 · Updated 7 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax · ★ 516 · Updated this week
- An open collection of implementation tips, tricks and resources for training large language models · ★ 460 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" · ★ 537 · Updated 6 months ago
- just a bunch of useful embeddings · ★ 466 · Updated 2 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends · ★ 811 · Updated this week
- Website for hosting the Open Foundation Models Cheat Sheet. · ★ 257 · Updated 4 months ago
- 🦖 X—LLM: Cutting Edge & Easy LLM Finetuning · ★ 381 · Updated 10 months ago