notarussianteenager / srf-attention
Simplex Random Feature attention, in PyTorch
☆74 · Updated last year
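For orientation: random-feature attention approximates the softmax kernel exp(q·k) with an inner product of feature maps φ(q)·φ(k), which lets attention run in time linear in sequence length; the simplex construction draws the projection directions as coupled simplex vertices rather than i.i.d. Gaussians to reduce estimator variance. Below is a minimal, non-causal PyTorch sketch using plain Gaussian (Performer-style) positive features; the names `phi` and `num_features` are illustrative and are not this repo's API.

```python
import math
import torch

def phi(x, W):
    # Positive random features for the softmax kernel (FAVOR+ style):
    # phi(x) = exp(Wx - ||x||^2 / 2) / sqrt(m), so that
    # E[phi(q) . phi(k)] ~= exp(q . k).
    m = W.shape[0]
    proj = x @ W.T                                   # (batch, seq, m)
    return torch.exp(proj - x.pow(2).sum(-1, keepdim=True) / 2) / math.sqrt(m)

def random_feature_attention(q, k, v, num_features=256):
    """q, k, v: (batch, seq, dim). Non-causal, linear-time attention."""
    d = q.shape[-1]
    q, k = q / d ** 0.25, k / d ** 0.25              # so q.k matches softmax's qk/sqrt(d)
    # I.i.d. Gaussian projections here; SRF instead couples the rows as
    # vertices of a random simplex, lowering the variance of the estimate.
    W = torch.randn(num_features, d, device=q.device, dtype=q.dtype)
    q_f, k_f = phi(q, W), phi(k, W)                  # (batch, seq, m)
    kv = torch.einsum('bsm,bsd->bmd', k_f, v)        # sum_s phi(k_s) v_s^T
    z = (q_f @ k_f.sum(dim=1).unsqueeze(-1)).clamp(min=1e-6)  # normalizer
    return torch.einsum('bsm,bmd->bsd', q_f, kv) / z
```

Because keys and values are summed into a single (m, dim) matrix before queries touch them, cost grows as O(seq · m · dim) rather than O(seq²); a causal variant would replace the sums with prefix sums.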
Alternatives and similar repositories for srf-attention
Users interested in srf-attention are comparing it to the repositories listed below.
- ☆61 · Updated last year
- ☆38 · Updated 11 months ago
- ☆22 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- ☆27 · Updated 11 months ago
- Cerule - A Tiny Mighty Vision Model ☆66 · Updated 9 months ago
- ☆49 · Updated last year
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus. ☆58 · Updated last year
- Sparse autoencoders for Contra text embedding models ☆25 · Updated last year
- Just large language models. Hackable, with as little abstraction as possible. Done for my own purposes; feel free to rip. ☆44 · Updated last year
- research implementation of Native Sparse Attention (arXiv:2502.11089) ☆54 · Updated 4 months ago
- ☆21 · Updated 7 months ago
- inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- ☆47 · Updated last year
- ☆63 · Updated 9 months ago
- A synthetic story narration dataset to study small audio LMs. ☆32 · Updated last year
- Synthetic data derived by templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search. ☆32 · Updated 4 months ago
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- Demonstration that finetuning a RoPE model on sequences longer than it was pretrained on extends the model's context limit ☆63 · Updated 2 years ago
- look how they massacred my boy ☆63 · Updated 8 months ago
- an implementation of Self-Extend, to expand the context window via grouped attention (see the position-remapping sketch after this list) ☆118 · Updated last year
- smolLM with the Entropix sampler, in PyTorch ☆150 · Updated 7 months ago
- ☆20 · Updated last year
- Large-scale 4D parallelism pretraining for 🤗 transformers with Mixture of Experts *(still a work in progress)*
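On the Self-Extend entry above: the core trick is remapping relative positions before RoPE is applied. Tokens inside a local window keep their exact distances, while farther tokens share floor-divided group distances, so every distance the model sees stays inside the range it was pretrained on. A minimal sketch of that remapping follows, assuming a (seq, seq) relative-position matrix; `group_size` and `window` stand in for the method's group size and neighbor window:

```python
import torch

def self_extend_rel_positions(seq_len, group_size=4, window=512):
    # rel[i, j] = i - j: causal relative distance from query i to key j.
    pos = torch.arange(seq_len)
    rel = pos.unsqueeze(1) - pos.unsqueeze(0)
    # Outside the neighbor window, floor-divide distances into groups and
    # shift so the mapping stays continuous at the window boundary.
    grouped = rel // group_size + (window - window // group_size)
    return torch.where(rel > window, grouped, rel)
```

Feeding RoPE these remapped distances instead of the raw i − j is what lets an un-finetuned model attend past its trained context; the actual method computes neighbor and grouped attention separately and merges them, which this single-matrix sketch collapses for brevity.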