haileyschoelkopf / triton-index
See https://github.com/cuda-mode/triton-index/ instead!
☆10 · Updated last year
Alternatives and similar repositories for triton-index
Users interested in triton-index are comparing it to the libraries listed below.
- ☆13 · Updated 4 months ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) · ☆32 · Updated last year
- ☆39 · Updated last year
- Utilities for Training Very Large Models · ☆58 · Updated last year
- ☆52 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs · ☆60 · Updated this week
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… · ☆27 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… · ☆34 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- Code for the note "NF4 Isn't Information Theoretically Optimal (and that's Good)" · ☆21 · Updated 2 years ago
- Triton Implementation of HyperAttention Algorithm · ☆48 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… · ☆14 · Updated last year
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code · ☆10 · Updated 2 years ago
- Simple and efficient pytorch-native transformer training and inference (batched) · ☆78 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training · ☆50 · Updated last year
- Minimum Description Length probing for neural network representations · ☆20 · Updated 8 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … · ☆60 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence · ☆58 · Updated 3 years ago
- My explorations into editing the knowledge and memories of an attention network · ☆34 · Updated 2 years ago
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … · ☆114 · Updated last year
- Demonstration that finetuning RoPE model on larger sequences than the pre-trained model adapts the model context limit · ☆62 · Updated 2 years ago
- Experiment of using Tangent to autodiff triton · ☆80 · Updated last year
- ☆34 · Updated last year
- ☆18 · Updated last year
- Experiments for efforts to train a new and improved t5 · ☆75 · Updated last year
- Make triton easier · ☆47 · Updated last year
- ☆31 · Updated 3 months ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* · ☆87 · Updated last year
- ☆31 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets · ☆13 · Updated last year