dingo-actual / infini-transformer
PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" (https://arxiv.org/abs/2404.07143)
☆294 · May 4, 2024 · Updated last year
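As a quick orientation on the mechanism this repository and most of the alternatives below implement: Infini-attention combines ordinary local attention over each segment with a compressive memory that accumulates a linear-attention summary of past segments. The following is a minimal, unofficial PyTorch sketch of that idea only; the function names, tensor shapes, and the fixed 0.5 gate are illustrative assumptions and do not reflect the API of this or any listed repository.

```python
import torch
import torch.nn.functional as F


def elu_plus_one(x: torch.Tensor) -> torch.Tensor:
    # Non-negative feature map used for the linear-attention memory (ELU + 1).
    return F.elu(x) + 1.0


def infini_attention_step(q, k, v, memory, norm):
    """One segment of an Infini-attention-style update (illustrative sketch).

    q, k, v: (batch, heads, seg_len, head_dim)
    memory:  (batch, heads, head_dim, head_dim), running sum of sigma(K)^T V
    norm:    (batch, heads, head_dim, 1), running sum of mapped keys
    """
    sq, sk = elu_plus_one(q), elu_plus_one(k)

    # Retrieve from the compressive memory: A_mem = (sigma(Q) M) / (sigma(Q) z).
    mem_out = (sq @ memory) / (sq @ norm).clamp(min=1e-6)

    # Standard causal dot-product attention over the current segment.
    local_out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

    # Update memory and normalizer with the current segment (simple linear
    # update rule; the paper also describes a delta-rule variant, omitted here).
    memory = memory + sk.transpose(-2, -1) @ v
    norm = norm + sk.sum(dim=-2, keepdim=True).transpose(-2, -1)

    # The paper blends the two streams with a learned gate; 0.5 is a placeholder.
    out = 0.5 * mem_out + 0.5 * local_out
    return out, memory, norm


# Example: process two segments of a longer sequence with constant memory cost.
b, h, seg, d = 1, 4, 128, 64
memory = torch.zeros(b, h, d, d)
norm = torch.zeros(b, h, d, 1)
for _ in range(2):
    q, k, v = (torch.randn(b, h, seg, d) for _ in range(3))
    out, memory, norm = infini_attention_step(q, k, v, memory, norm)
```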
Alternatives and similar repositories for infini-transformer
Users interested in infini-transformer are comparing it to the libraries listed below
- Unofficial PyTorch/🤗Transformers(Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with I… ☆374 · Apr 23, 2024 · Updated last year
- Efficient Infinite Context Transformers with Infini-attention Pytorch Implementation + QwenMoE Implementation + Training Script + 1M cont… ☆86 · May 9, 2024 · Updated last year
- An unofficial pytorch implementation of 'Efficient Infinite Context Transformers with Infini-attention' ☆54 · Aug 19, 2024 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Apr 20, 2024 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Apr 13, 2025 · Updated 10 months ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆58 · Feb 9, 2026 · Updated last week
- My fork of Allen AI's OLMo for educational purposes. ☆28 · Dec 5, 2024 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆665 · Jun 1, 2024 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆445 · Oct 16, 2024 · Updated last year
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) ☆76 · Oct 19, 2024 · Updated last year
- Reference implementation of Megalodon 7B model ☆528 · May 17, 2025 · Updated 9 months ago
- Implementation of Infini-Transformer in Pytorch ☆112 · Jan 4, 2025 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆986 · Jul 23, 2024 · Updated last year
- 【CVPRW'23】First Place Solution to the CVPR'2023 AQTC Challenge ☆15 · Jul 18, 2023 · Updated 2 years ago
- Gemma 2B with 10M context length using Infini-attention. ☆935 · May 12, 2024 · Updated last year
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆36 · Jun 7, 2024 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · May 26, 2024 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Jun 15, 2024 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,672 · Oct 28, 2024 · Updated last year
- Latent Large Language Models ☆19 · Aug 24, 2024 · Updated last year
- Beyond Language Models: Byte Models are Digital World Simulators ☆334 · Jun 6, 2024 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆252 · Oct 30, 2024 · Updated last year
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ☆452 · May 13, 2025 · Updated 9 months ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Dec 30, 2023 · Updated 2 years ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆913 · Dec 18, 2025 · Updated last month
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention… ☆1,188 · Sep 30, 2025 · Updated 4 months ago
- Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Paper ☆150 · Jul 20, 2024 · Updated last year
- Official repository for ORPO ☆471 · May 31, 2024 · Updated last year
- minimal diffusion transformer in pytorch. ☆16 · Oct 6, 2024 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Oct 15, 2024 · Updated last year
- Ring attention implementation with flash attention ☆980 · Sep 10, 2025 · Updated 5 months ago
- ☆82 · Apr 16, 2024 · Updated last year
- LongRoPE is a novel method that can extend the context window of pre-trained LLMs to an impressive 2048k tokens. ☆278 · Oct 28, 2025 · Updated 3 months ago
- Token Omission Via Attention ☆128 · Oct 13, 2024 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆115 · Apr 22, 2025 · Updated 9 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆203 · Jul 17, 2024 · Updated last year
- Testing and evaluating the capabilities of Vision-Language models (PaliGemma) in performing computer vision tasks such as object detectio… ☆85 · May 29, 2024 · Updated last year
- Make triton easier ☆50 · Jun 12, 2024 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year