Efficient Infinite Context Transformers with Infini-attention: PyTorch implementation + QwenMoE implementation + training script + 1M-context passkey retrieval
☆91 · May 9, 2024 · Updated last year
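The core mechanism shared by the repositories below is the Infini-attention compressive memory from "Leave No Context Behind": each segment's keys and values are folded into a fixed-size associative matrix via a kernel feature map (ELU + 1), and later segments read from it with their queries. A minimal NumPy sketch of that update/retrieval step, with illustrative function names (not taken from any of these repos):

```python
import numpy as np

def elu_plus_one(x):
    # sigma(x) = ELU(x) + 1, the kernel feature map used in Infini-attention
    return np.where(x > 0, x + 1.0, np.exp(np.minimum(x, 0.0)))

def update_memory(M, z, K, V):
    # Fold one segment's keys/values into the compressive memory:
    #   M <- M + sigma(K)^T V,   z <- z + sum_t sigma(K_t)
    sK = elu_plus_one(K)
    return M + sK.T @ V, z + sK.sum(axis=0)

def retrieve(M, z, Q):
    # Read from memory with the current segment's queries:
    #   A_mem = sigma(Q) M / (sigma(Q) z)
    sQ = elu_plus_one(Q)
    return (sQ @ M) / (sQ @ z)[:, None]

# Demo: stream two segments into memory, then query it.
rng = np.random.default_rng(0)
d = 8
keys, vals = rng.normal(size=(16, d)), rng.normal(size=(16, d))
M, z = np.zeros((d, d)), np.zeros(d)
for seg in range(0, 16, 8):                      # process in segments of 8
    M, z = update_memory(M, z, keys[seg:seg+8], vals[seg:seg+8])
out = retrieve(M, z, rng.normal(size=(4, d)))    # (4, d) memory readout
```

Because the update is a plain sum of outer products, the memory state is independent of how the sequence is chunked into segments, which is what makes streaming over arbitrarily long context possible at O(d²) state. The full method also mixes this memory readout with local dot-product attention via a learned gate, which is omitted here.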
Alternatives and similar repositories for Infini-Attention
Users that are interested in Infini-Attention are comparing it to the libraries listed below.
- Unofficial PyTorch/🤗Transformers (Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with I… ☆376 · Apr 23, 2024 · Updated 2 years ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆299 · May 4, 2024 · Updated 2 years ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆59 · Apr 20, 2026 · Updated 2 weeks ago
- An unofficial PyTorch implementation of 'Efficient Infinite Context Transformers with Infini-attention' ☆55 · Aug 19, 2024 · Updated last year
- PyTorch implementation of https://arxiv.org/html/2404.07143v1 ☆21 · Apr 13, 2024 · Updated 2 years ago
- Code for the blog post: Can Better Cold-Start Strategies Improve RL Training for LLMs? ☆20 · Mar 9, 2025 · Updated last year
- DPO, but faster 🚀 ☆52 · Dec 6, 2024 · Updated last year
- To mitigate position bias in LLMs, especially in long-context scenarios, we scale only one dimension of LLMs, reducing position bias and … ☆11 · Jun 18, 2024 · Updated last year
- The repository of the paper Personalized Multimodal Response Generation with Large Language Models ☆18 · Jun 28, 2024 · Updated last year
- Training framework for Large Behavioral Models ☆28 · Sep 17, 2025 · Updated 7 months ago
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆36 · Jun 7, 2024 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆450 · Oct 16, 2024 · Updated last year
- An implementation of the paper "Retentive Network: A Successor to Transformer for Large Language Models" (https://arxiv.org/pdf/2307.08621.pdf) ☆11 · Jul 25, 2023 · Updated 2 years ago
- ☆53 · May 13, 2024 · Updated last year
- Official implementation of ECCV24 paper: POA ☆24 · Aug 8, 2024 · Updated last year
- Implementation of a Hierarchical Mamba as described in the paper: "Hierarchical State Space Models for Continuous Sequence-to-Sequence Mo… ☆15 · Nov 11, 2024 · Updated last year
- ☆23 · Mar 7, 2025 · Updated last year
- BlockRank makes LLMs efficient and scalable for RAG and in-context ranking ☆44 · Dec 12, 2025 · Updated 4 months ago
- Modular task-agnostic training pipeline using LFM2 from Liquid AI with unsloth. ☆16 · Sep 13, 2025 · Updated 7 months ago
- ☆16 · Jul 29, 2025 · Updated 9 months ago
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆41 · Oct 11, 2024 · Updated last year
- Implementation of "Decoding-time Realignment of Language Models", ICML 2024. ☆21 · Jun 17, 2024 · Updated last year
- An implementation of LazyLLM token pruning for the LLaMa 2 model family. ☆13 · Jan 6, 2025 · Updated last year
- [ICML-2025] We introduce Lie group Relative position Encodings (LieRE) that go beyond RoPE in supporting n-dimensional inputs. ☆14 · Aug 8, 2025 · Updated 8 months ago
- Tools and Utils for Experiments (TUX) ☆16 · Jan 28, 2025 · Updated last year
- ☆53 · Oct 29, 2024 · Updated last year
- Some common Hugging Face Transformers in maximal update parametrization (µP) ☆88 · Mar 14, 2022 · Updated 4 years ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2B model. The project includes both model and train… ☆59 · Apr 20, 2024 · Updated 2 years ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆183 · Jul 12, 2024 · Updated last year
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models ☆78 · Oct 16, 2024 · Updated last year
- Keyformer proposes KV cache reduction through key-token identification, without the need for fine-tuning ☆57 · Mar 26, 2024 · Updated 2 years ago
- Gemma 2B with 10M context length using Infini-attention. ☆936 · May 12, 2024 · Updated last year
- Official code repository for PromptMix: A Class Boundary Augmentation Method for Large Language Model Distillation, EMNLP 2023 ☆12 · Dec 13, 2023 · Updated 2 years ago
- Metis-RISE: RL Incentivizes and SFT Enhances Multimodal Reasoning Model Learning ☆22 · Jun 26, 2025 · Updated 10 months ago
- Extract a single expert from a Mixture of Experts model using slerp interpolation. ☆19 · May 26, 2024 · Updated last year
- ☆13 · Nov 30, 2018 · Updated 7 years ago
- Triton version of GQA flash attention, based on the tutorial ☆12 · Aug 4, 2024 · Updated last year
- Research work aimed at addressing the problem of modeling infinite-length context ☆48 · Dec 18, 2025 · Updated 4 months ago
- ☆70 · Jul 8, 2025 · Updated 9 months ago
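Slerp (spherical linear interpolation), as used in expert-extraction and model-merging tools, interpolates along the arc between two weight tensors rather than the straight line, preserving magnitude better than plain lerp. A generic NumPy sketch of the formula (not that repository's actual code):

```python
import numpy as np

def slerp(a, b, t, eps=1e-8):
    # Spherical linear interpolation between two flattened weight tensors.
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(a_n @ b_n, -1.0, 1.0))  # angle between directions
    if theta < eps:
        return (1.0 - t) * a + t * b                  # nearly parallel: plain lerp
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```

At t = 0 the result is exactly the first tensor and at t = 1 exactly the second; intermediate t traces the great-circle arc between their directions, which is why it is a popular choice for blending expert weights.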