kyegomez / Infini-attention
Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch
☆58 · Updated this week
Alternatives and similar repositories for Infini-attention
Users interested in Infini-attention are comparing it to the libraries listed below.
- Brainwave is a state-of-the-art neural decoder that transforms electroencephalogram (EEG) and brain signals into multimodal outputs inclu… ☆14 · Updated Oct 6, 2025
- OmniByteFormer is a generalized Transformer model that can process any type of data by converting it into byte sequences, bypassing tradi… ☆15 · Updated this week
- An unofficial PyTorch implementation of 'Efficient Infinite Context Transformers with Infini-attention' ☆54 · Updated Aug 19, 2024
- ☆15 · Updated Mar 22, 2023
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Updated Mar 18, 2023
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated Mar 15, 2024
- A swarm of LLM agents that will help you test, document, and productionize your code! ☆16 · Updated Feb 7, 2026
- Implementation of the LDP module block in PyTorch and Zeta from the paper: "MobileVLM: A Fast, Strong and Open Vision Language Assistant … ☆15 · Updated Mar 11, 2024
- Efficient Infinite Context Transformers with Infini-attention PyTorch Implementation + QwenMoE Implementation + Training Script + 1M cont… ☆86 · Updated May 9, 2024
- PyTorch implementation of the model from "Mirasol3B: A Multimodal Autoregressive Model for Time-Aligned and Contextual Modalities" ☆26 · Updated Jan 27, 2025
- Train a production-grade GPT in less than 400 lines of code. Better than Karpathy's version and gigaGPT ☆16 · Updated Feb 6, 2026
- An implementation of the paper Brain2Qwerty that translates brain EEG data into text for reading people's brains. There was no code so we… ☆22 · Updated Feb 9, 2025
- Mamba support for TransformerLens ☆19 · Updated Sep 17, 2024
- ATLAS is a sophisticated real-time risk analysis system designed for institutional-grade market risk assessment. Built with high-frequenc… ☆17 · Updated Jan 13, 2025
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Updated Jan 23, 2024
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… ☆19 · Updated Mar 15, 2025
- CogNetX is an advanced, multimodal neural network architecture inspired by human cognition. It integrates speech, vision, and video proce… ☆19 · Updated Jan 31, 2026
- Unofficial PyTorch/🤗 Transformers (Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with I… ☆374 · Updated Apr 23, 2024
- Learning to Model Editing Processes ☆26 · Updated Aug 3, 2025
- ☆20 · Updated May 30, 2024
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated Apr 30, 2025
- Transform unstructured documents into actionable, structured data with enterprise-grade precision and reliability, ready for large-scale … ☆20 · Updated Oct 13, 2025
- Mamba R1 is a novel architecture that combines the efficiency of Mamba's state space models with the scalability of Mixture of Ex… ☆25 · Updated Oct 13, 2025
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆26 · Updated this week
- Official code for the paper "Attention as a Hypernetwork" ☆47 · Updated Jun 22, 2024
- The Swarm Ecosystem ☆26 · Updated Aug 1, 2024
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆294 · Updated May 4, 2024
- Statistical discontinuous constituent parsing ☆11 · Updated Feb 15, 2018
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆41 · Updated Oct 11, 2024
- ☆24 · Updated Sep 25, 2024
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆92 · Updated Jul 17, 2025
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated Jun 6, 2024
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Updated Oct 27, 2022
- PyTorch implementation for PaLM: A Hybrid Parser and Language Model. ☆10 · Updated Jan 7, 2020
- ☆11 · Updated Oct 11, 2023
- This repository serves as a central hub for discovering tools and services focused on automated prompt engineering. Whether you're lookin… ☆14 · Updated Oct 11, 2024
- To mitigate position bias in LLMs, especially in long-context scenarios, we scale only one dimension of LLMs, reducing position bias and … ☆11 · Updated Jun 18, 2024
- Source code for "N-ary Constituent Tree Parsing with Recursive Semi-Markov Model" published at ACL 2021 ☆10 · Updated May 27, 2021
- Implementation of the Pairformer model used in AlphaFold 3 ☆14 · Updated this week