recursal / GoldFinch-paper
GoldFinch and other hybrid transformer components
☆39 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for GoldFinch-paper
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆30 · Updated 3 months ago
- A repository for research on medium-sized language models. ☆74 · Updated 5 months ago
- ☆62 · Updated 3 months ago
- Using FlexAttention to compute attention with different masking patterns ☆40 · Updated last month
- ☆27 · Updated 5 months ago
- [ICML 24 NGSM workshop] Associative Recurrent Memory Transformer implementation and scripts for training and evaluation ☆31 · Updated this week
- Triton implementation of the HyperAttention algorithm ☆46 · Updated 11 months ago
- Here we will test various linear attention designs. ☆56 · Updated 6 months ago
- ☆53 · Updated 3 weeks ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆24 · Updated 7 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆44 · Updated 10 months ago
- Linear Attention Sequence Parallelism (LASP) ☆64 · Updated 5 months ago
- The official implementation of MARS: Unleashing the Power of Variance Reduction for Training Large Models ☆65 · Updated this week
- ☆54 · Updated last month
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models"☆42Updated last week
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling☆35Updated 11 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers"☆36Updated last year
- ☆49Updated 8 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs.☆84Updated last week
- RWKV-7: Surpassing GPT☆44Updated this week
- Engineering the state of RNN language models (Mamba, RWKV, etc.)☆32Updated 5 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore☆19Updated 2 months ago
- ☆35Updated 3 weeks ago
- [WIP] Transformer to embed Danbooru labelsets☆13Updated 7 months ago
- Collection of autoregressive model implementation☆67Updated this week
- ☆45Updated 2 months ago
- ☆40Updated 2 weeks ago
- Training hybrid models for dummies.☆15Updated 3 weeks ago