Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch
☆59 · Updated Apr 20, 2026
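For context on what these repositories implement: Infini-attention augments standard attention with a compressive memory that is read via linear (kernelized) attention and updated additively once per segment. Below is a minimal PyTorch sketch of the per-segment memory retrieval and update, assuming the ELU+1 feature map and the simple (non-delta) update rule described in the paper; the function name, shapes, and epsilon are illustrative, not taken from any of the listed repositories:

```python
import torch
import torch.nn.functional as F

def elu_plus_one(x):
    # Feature map sigma(x) = ELU(x) + 1, keeping values positive
    return F.elu(x) + 1.0

def infini_memory_step(q, k, v, memory, z):
    """One segment of a compressive-memory step (illustrative).

    q, k: (seq, d_key); v: (seq, d_value)
    memory: (d_key, d_value) accumulated from previous segments
    z: (d_key,) normalizer, the running sum of sigma(K)
    """
    sq, sk = elu_plus_one(q), elu_plus_one(k)
    # Retrieve from memory written by earlier segments:
    # A_mem = sigma(Q) M / (sigma(Q) z)
    a_mem = (sq @ memory) / (sq @ z).clamp(min=1e-6).unsqueeze(-1)
    # Linear update with the current segment's keys/values:
    # M <- M + sigma(K)^T V, z <- z + sum_t sigma(K_t)
    new_memory = memory + sk.transpose(0, 1) @ v
    new_z = z + sk.sum(dim=0)
    return a_mem, new_memory, new_z

# Usage: stream two segments through the memory
d_key, d_val, seg = 4, 4, 3
memory, z = torch.zeros(d_key, d_val), torch.zeros(d_key)
for _ in range(2):
    q, k = torch.randn(seg, d_key), torch.randn(seg, d_key)
    v = torch.randn(seg, d_val)
    a_mem, memory, z = infini_memory_step(q, k, v, memory, z)
```

In the full model, `a_mem` is blended with ordinary local dot-product attention through a learned gate, so each head can trade off long-range memory against the current segment.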
Alternatives and similar repositories for Infini-attention
Users that are interested in Infini-attention are comparing it to the libraries listed below.
- An unofficial PyTorch implementation of "Efficient Infinite Context Transformers with Infini-attention" ☆55 · Updated Aug 19, 2024
- OmniByteFormer is a generalized Transformer model that can process any type of data by converting it into byte sequences, bypassing tradi… ☆16 · Updated Apr 13, 2026
- Brainwave is a state-of-the-art neural decoder that transforms electroencephalogram (EEG) and brain signals into multimodal outputs inclu… ☆14 · Updated Oct 6, 2025
- ☆16 · Updated Mar 22, 2023
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Updated Mar 18, 2023
- CogNetX is an advanced, multimodal neural network architecture inspired by human cognition. It integrates speech, vision, and video proce… ☆21 · Updated this week
- Efficient Infinite Context Transformers with Infini-attention PyTorch Implementation + QwenMoE Implementation + Training Script + 1M cont… ☆91 · Updated May 9, 2024
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated Mar 15, 2024
- Unofficial PyTorch/🤗Transformers (Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with I… ☆376 · Updated Apr 23, 2024
- Implementation of the LDP module block in PyTorch and Zeta from the paper: "MobileVLM: A Fast, Strong and Open Vision Language Assistant … ☆15 · Updated Mar 11, 2024
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Updated Jan 23, 2024
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆299 · Updated May 4, 2024
- Mamba support for transformer lens ☆20 · Updated Sep 17, 2024
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆24 · Updated Apr 30, 2025
- PyTorch implementation of https://arxiv.org/html/2404.07143v1 ☆21 · Updated Apr 13, 2024
- Transform unstructured documents into actionable, structured data with enterprise-grade precision and reliability, ready for large-scale … ☆20 · Updated Oct 13, 2025
- To mitigate position bias in LLMs, especially in long-context scenarios, we scale only one dimension of LLMs, reducing position bias and … ☆11 · Updated Jun 18, 2024
- An implementation of the paper Brain2Qwerty that translates brain EEG data into text for reading people's brains. There was no code so we… ☆23 · Updated Feb 9, 2025
- Learning to Model Editing Processes ☆26 · Updated Aug 3, 2025
- ☆14 · Updated Nov 21, 2025
- ☆11 · Updated Oct 11, 2023
- Build high-performance AI models with modular building blocks ☆590 · Updated this week
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated Oct 9, 2022
- Mamba R1 represents a novel architecture that combines the efficiency of Mamba's state space models with the scalability of Mixture of Ex… ☆25 · Updated Oct 13, 2025
- This is the official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆41 · Updated Oct 11, 2024
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Updated Oct 27, 2022
- Train a production-grade GPT in less than 400 lines of code. Better than Karpathy's version and GIGAGPT. ☆17 · Updated Apr 13, 2026
- PyTorch implementation for PaLM: A Hybrid Parser and Language Model ☆10 · Updated Jan 7, 2020
- Recursive Bayesian Networks ☆11 · Updated May 11, 2025
- Triton version of GQA flash attention, based on the tutorial ☆12 · Updated Aug 4, 2024
- Deep neural models for core NLP tasks ☆13 · Updated Nov 9, 2017
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆344 · Updated Feb 23, 2025
- Variable-order CRFs with structure learning ☆17 · Updated Aug 1, 2024
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆28 · Updated this week
- ☆12 · Updated Jan 29, 2021
- ☆12 · Updated Mar 7, 2022
- Code for PromptCSE, EMNLP 2022 ☆11 · Updated Apr 10, 2023
- Source code for the NAACL 2022 main conference paper "Dynamic Programming in Rank Space: Scaling Structured Inference with Low-Rank HMMs and PCFGs" ☆10 · Updated Sep 26, 2022
- Implementation code for "LLM-based Medical Assistant Personalization with Short- and Long-Term Memory Coordination" ☆14 · Updated Apr 25, 2025