vmarinowski / infini-attention
An unofficial PyTorch implementation of 'Efficient Infinite Context Transformers with Infini-attention'
☆46 · Updated 5 months ago
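Before the list, a quick orientation on the technique this repository implements. Infini-attention runs ordinary causal dot-product attention within each segment, reads from a compressive memory carried across segments, updates that memory, and gates the two streams with a learned scalar. The single-head sketch below follows the paper's linear-update variant with the ELU + 1 feature map; the function name, tensor layout, and epsilon are illustrative assumptions, not code from this repository.

```python
import torch
import torch.nn.functional as F

def elu_plus_one(x):
    # The paper's kernel feature map: sigma(x) = ELU(x) + 1 (always positive).
    return F.elu(x) + 1.0

def infini_attention_segment(q, k, v, memory, z, beta):
    """One Infini-attention step over a single segment, single head.

    q, k, v : (batch, seg_len, d_head)
    memory  : (batch, d_head, d_head)  compressive memory M_{s-1}
    z       : (batch, d_head, 1)       normalization term z_{s-1}
    beta    : learned scalar gate (pre-sigmoid)
    """
    # 1) Local causal dot-product attention within the segment.
    seg_len, d = q.size(1), q.size(-1)
    causal = torch.triu(torch.ones(seg_len, seg_len, dtype=torch.bool, device=q.device), 1)
    scores = (q @ k.transpose(-2, -1)) * d ** -0.5
    a_dot = scores.masked_fill(causal, float("-inf")).softmax(-1) @ v

    # 2) Retrieve from the *previous* memory state:
    #    A_mem = sigma(Q) M_{s-1} / (sigma(Q) z_{s-1})
    sq = elu_plus_one(q)
    a_mem = (sq @ memory) / (sq @ z + 1e-6)  # epsilon is an illustrative choice

    # 3) Linear memory update: M_s = M_{s-1} + sigma(K)^T V,
    #    z_s = z_{s-1} + sum_t sigma(k_t). (The paper also has a delta-rule variant.)
    sk = elu_plus_one(k)
    memory = memory + sk.transpose(-2, -1) @ v
    z = z + sk.sum(dim=1, keepdim=True).transpose(-2, -1)

    # 4) Gate long-term (memory) and local attention with a learned scalar.
    g = torch.sigmoid(beta)
    return g * a_mem + (1.0 - g) * a_dot, memory, z
```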
Alternatives and similar repositories for infini-attention:
Users interested in infini-attention are comparing it to the repositories listed below.
- A repository for research on medium-sized language models. ☆76 · Updated 7 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch… ☆53 · Updated this week
- This is the official repository for Inheritune. ☆109 · Updated 3 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆148 · Updated last month
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆31 · Updated 5 months ago
- Efficient Infinite Context Transformers with Infini-attention PyTorch Implementation + QwenMoE Implementation + Training Script + 1M context… ☆74 · Updated 8 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆76 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆96 · Updated 3 months ago
- Train, tune, and run inference with the Bamba model ☆76 · Updated this week
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated 4 months ago
- GoldFinch and other hybrid transformer components ☆42 · Updated 5 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆23 · Updated 4 months ago
- Unofficial Implementation of Evolutionary Model Merging ☆33 · Updated 9 months ago
- QuIP quantization ☆48 · Updated 10 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated 10 months ago
- RWKV-7: Surpassing GPT ☆71 · Updated 2 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆53 · Updated 4 months ago
- The official code repo and data hub of the top_nsigma sampling strategy for LLMs. ☆20 · Updated this week
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆115 · Updated 4 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆56 · Updated 2 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 2 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆139 · Updated 4 months ago
- A simple PyTorch implementation of high-performance Multi-Query Attention (a minimal sketch of the idea follows below). ☆16 · Updated last year
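For reference, the technique the last entry names fits in a few lines. The sketch below is an illustrative assumption about shapes and naming, not code from the listed repository: in Multi-Query Attention, all query heads share a single key/value head, so matmul broadcasting does the work and the KV cache shrinks by a factor of n_heads.

```python
import torch

def multi_query_attention(q, k, v):
    """Multi-Query Attention: every query head shares one K/V head.

    q    : (batch, n_heads, seq_len, d_head)
    k, v : (batch, 1, seq_len, d_head)  # the single shared K/V head
    """
    # matmul broadcasts across the head dimension, so K/V are stored
    # (and cached at inference time) once instead of once per head.
    scores = (q @ k.transpose(-2, -1)) * q.size(-1) ** -0.5
    return scores.softmax(dim=-1) @ v
```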