haileyschoelkopf / triton-index
See https://github.com/cuda-mode/triton-index/ instead!
☆11 · Updated last year
Alternatives and similar repositories for triton-index
Users interested in triton-index are comparing it to the libraries listed below.
- ☆13 · Updated last month
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- ☆39 · Updated last year
- ☆20 · Updated 2 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/…) ☆28 · Updated last year
- Experiments toward training a new and improved T5 ☆76 · Updated last year
- Code for the note "NF4 Isn't Information Theoretically Optimal (and that's Good)" ☆21 · Updated 2 years ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆61 · Updated 3 years ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- ☆53 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated last year
- ☆50 · Updated last year
- Minimum Description Length probing for neural network representations ☆20 · Updated 11 months ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 3 years ago
- ☆32 · Updated 2 years ago
- Embedding Recycling for Language models ☆38 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- Simple and efficient pytorch-native transformer training and inference (batched) ☆79 · Updated last year
- train with kittens! ☆63 · Updated last year
- ☆69 · Updated last year
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code ☆10 · Updated 2 years ago
- JAX/Flax implementation of the Hyena Hierarchy ☆34 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆35 · Updated 2 years ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆40 · Updated 2 years ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆86 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. ☆49 · Updated 2 years ago
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆116 · Updated last year