PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" (https://arxiv.org/abs/2404.07143)
☆299 · May 4, 2024 · Updated last year
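For context, the Infini-attention mechanism these repositories implement augments local attention with a compressive memory that is read before, and updated after, each segment. A minimal single-head sketch of the memory path is below; function and variable names are my own, following the paper's linear (non-delta) update rule, and this is an illustration rather than the code of any listed repo.

```python
import torch

def elu_plus_one(x):
    # The paper's nonlinearity sigma: ELU(x) + 1, which keeps activations positive.
    return torch.nn.functional.elu(x) + 1.0

def infini_attention_step(q, k, v, memory, z):
    """One segment of the compressive-memory path of Infini-attention.

    q, k, v: (seq, d_head) projections for the current segment.
    memory:  (d_head, d_head) running associative memory matrix.
    z:       (d_head,) running normalization term.
    Returns the memory read-out and the updated (memory, z).
    """
    sigma_q = elu_plus_one(q)            # (seq, d_head)
    sigma_k = elu_plus_one(k)            # (seq, d_head)
    # Retrieve from memory written by previous segments (read before write).
    denom = (sigma_q @ z).clamp(min=1e-6).unsqueeze(-1)   # (seq, 1)
    retrieved = (sigma_q @ memory) / denom                # (seq, d_head)
    # Linear memory update with this segment's keys and values.
    memory = memory + sigma_k.transpose(0, 1) @ v         # (d_head, d_head)
    z = z + sigma_k.sum(dim=0)                            # (d_head,)
    return retrieved, memory, z

# Usage: process one 4-token segment with head dimension 8.
d = 8
mem = torch.zeros(d, d)
z = torch.zeros(d)
q, k, v = torch.randn(4, d), torch.randn(4, d), torch.randn(4, d)
out, mem, z = infini_attention_step(q, k, v, mem, z)
print(out.shape)  # torch.Size([4, 8])
```

In the full model this read-out is blended with ordinary local dot-product attention via a learned gate; the sketch omits that gating and multi-head batching for brevity.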
Alternatives and similar repositories for infini-transformer
Users interested in infini-transformer are comparing it to the libraries listed below.
- Unofficial PyTorch/🤗 Transformers (Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with I… · ☆376 · Apr 23, 2024 · Updated last year
- Efficient Infinite Context Transformers with Infini-attention PyTorch implementation + QwenMoE implementation + training script + 1M cont… · ☆91 · May 9, 2024 · Updated last year
- An unofficial PyTorch implementation of "Efficient Infinite Context Transformers with Infini-attention" · ☆55 · Aug 19, 2024 · Updated last year
- A personal reimplementation of Google's Infini-Transformer using a small 2B model. The project includes both model and train… · ☆59 · Apr 20, 2024 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTo… · ☆58 · Mar 30, 2026 · Updated 2 weeks ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) · ☆165 · Apr 13, 2025 · Updated last year
- My fork of Allen AI's OLMo for educational purposes. · ☆28 · Dec 5, 2024 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning · ☆664 · Jun 1, 2024 · Updated last year
- PyTorch implementation of https://arxiv.org/html/2404.07143v1 · ☆21 · Apr 13, 2024 · Updated 2 years ago
- Implementation of Infini-Transformer in PyTorch · ☆112 · Jan 4, 2025 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" · ☆117 · Jun 15, 2024 · Updated last year
- Gemma 2B with 10M context length using Infini-attention. · ☆935 · May 12, 2024 · Updated last year
- Reference implementation of the Megalodon 7B model · ☆526 · May 17, 2025 · Updated 11 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" · ☆450 · Oct 16, 2024 · Updated last year
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" · ☆36 · Jun 7, 2024 · Updated last year
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" · ☆153 · Jul 20, 2024 · Updated last year
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) · ☆76 · Oct 19, 2024 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars · ☆987 · Jul 23, 2024 · Updated last year
- LongRoPE, a method that extends the context window of pre-trained LLMs to 2048k tokens. · ☆284 · Oct 28, 2025 · Updated 5 months ago
- Minimal diffusion transformer in PyTorch. · ☆17 · Oct 6, 2024 · Updated last year
- Our own implementation of "Layer-Selective Rank Reduction" · ☆241 · May 26, 2024 · Updated last year
- Yet another frontend for LLMs, written in .NET and WinUI 3 · ☆11 · Sep 14, 2025 · Updated 7 months ago
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) · ☆19 · Feb 11, 2025 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference via approximate and dynamic sparse attention… · ☆1,203 · Apr 8, 2026 · Updated last week
- Token Omission Via Attention · ☆127 · Oct 13, 2024 · Updated last year
- EvolKit is a framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… · ☆258 · Oct 30, 2024 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" · ☆155 · Oct 15, 2024 · Updated last year
- Testing and evaluating the capabilities of vision-language models (PaliGemma) on computer vision tasks such as object detectio… · ☆88 · May 29, 2024 · Updated last year
- To assess long-text capabilities more comprehensively, we propose Needle-in-a-Haystack PLUS, which shifts the focus from simple fact r… · ☆13 · Mar 4, 2024 · Updated 2 years ago
- SmolLM with the Entropix sampler in PyTorch · ☆149 · Oct 31, 2024 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) · ☆209 · May 20, 2024 · Updated last year
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) · ☆455 · May 13, 2025 · Updated 11 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. · ☆465 · Apr 18, 2024 · Updated 2 years ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection · ☆1,683 · Oct 28, 2024 · Updated last year
- Sakura-SOLAR-DPO: Merge, SFT, and DPO · ☆116 · Dec 30, 2023 · Updated 2 years ago
- [CVPRW'23] First-place solution to the CVPR 2023 AQTC Challenge · ☆15 · Jul 18, 2023 · Updated 2 years ago
- Collection of autoregressive model implementations · ☆85 · Feb 23, 2026 · Updated last month
- Official implementation of Half-Quadratic Quantization (HQQ) · ☆928 · Feb 26, 2026 · Updated last month
- Backtracing: Retrieving the Cause of the Query (EACL 2024 Long Paper, Findings) · ☆92 · Jul 21, 2024 · Updated last year