This is a personal reimplementation of Google's Infini-transformer, using a small 2B model. The project includes both model and training code.
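The core idea behind Infini-transformer is Infini-attention's compressive memory: each segment's keys and values are folded into a fixed-size matrix that later segments can query, alongside ordinary local attention. As a rough illustration (not code from this repository; the function and variable names are my own, and this sketches only the memory path described in the "Leave No Context Behind" paper), one segment step looks like:

```python
import numpy as np

def elu_plus_one(x):
    # sigma(x) = ELU(x) + 1; keeps activations positive so the
    # linear-attention normalizer is well defined.
    return np.where(x > 0, x + 1.0, np.exp(x))

def infini_memory_step(Q, K, V, M, z):
    """One segment of Infini-attention's compressive memory (sketch).

    Q, K: (seg_len, d_key), V: (seg_len, d_value)
    M: (d_key, d_value) running memory, z: (d_key,) running normalizer.
    Returns the memory readout for this segment and the updated (M, z).
    """
    sq, sk = elu_plus_one(Q), elu_plus_one(K)
    # Retrieve from memory accumulated over previous segments.
    A_mem = (sq @ M) / (sq @ z + 1e-6)[:, None]
    # Fold the current segment's keys/values into the memory.
    M_new = M + sk.T @ V
    z_new = z + sk.sum(axis=0)
    return A_mem, M_new, z_new
```

In the full layer, this memory readout is blended with standard dot-product attention over the local segment via a learned gate, which is what lets the model attend over effectively unbounded context at constant memory cost.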
☆59, updated Apr 20, 2024
Alternatives and similar repositories for infini-mini-transformer
Users interested in infini-mini-transformer are comparing it to the libraries listed below.
- ☆13, updated Apr 15, 2024
- Xmixers: A collection of SOTA efficient token/channel mixers (☆28, updated Sep 4, 2025)
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" (☆18, updated Mar 15, 2024)
- LongQLoRA: Extend Context Length of LLMs Efficiently (☆168, updated Nov 12, 2023)
- Official Implementation of ACL 2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … (☆14, updated Aug 25, 2023)
- PyTorch implementation for PaLM: A Hybrid Parser and Language Model (☆10, updated Jan 7, 2020)
- Official Code Repository for the paper "Key-value memory in the brain" (☆31, updated Feb 25, 2025)
- Unofficial PyTorch/🤗Transformers (Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with I… (☆376, updated Apr 23, 2024)
- Megatron LM 11B on Huggingface Transformers (☆27, updated Jul 11, 2021)
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings (☆170, updated Jun 13, 2024)
- Source-to-Source Debuggable Derivatives in Pure Python (☆15, updated Jan 23, 2024)
- FLASHQuad_pytorch (☆68, updated Apr 1, 2022)
- Advanced Formal Language Theory (263-5352-00L; Spring 2023) (☆10, updated Feb 21, 2023)
- ☆37, updated Apr 1, 2026
- AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension (ACL 2022) (☆27, updated May 20, 2022)
- ☆20, updated May 30, 2024
- Parallel Associative Scan for Language Models (☆18, updated Jan 8, 2024)
- Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization (☆16, updated Jun 5, 2018)
- Linear Attention Sequence Parallelism (LASP) (☆88, updated Jun 4, 2024)
- My fork of Allen AI's OLMo for educational purposes (☆28, updated Dec 5, 2024)
- [EMNLP 2023] Knowledge Rumination for Pre-trained Language Models (☆17, updated Jun 29, 2023)
- Sparse Attention with Linear Units (☆20, updated Apr 21, 2021)
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … (☆18, updated Apr 12, 2024)
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models (☆78, updated Mar 12, 2024)
- ☆16, updated Mar 13, 2023
- Implementation of the paper Data Engineering for Scaling Language Models to 128K Context (☆497, updated Mar 19, 2024)
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… (☆58, updated Mar 30, 2026)
- Reading order prediction; LayoutReader (☆19, updated May 8, 2025)
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" (☆450, updated Oct 16, 2024)
- Official Repository for Efficient Linear-Time Attention Transformers (☆18, updated Jun 2, 2024)
- A purer tokenizer with a higher compression ratio (☆488, updated Nov 27, 2024)
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) (☆209, updated May 20, 2024)
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling (☆42, updated Dec 29, 2025)
- ☆13, updated Jun 19, 2021
- Code for the Ordered Memory paper (☆29, updated Jan 12, 2020)
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning (☆664, updated Jun 1, 2024)
- AGaLiTe: Approximate Gated Linear Transformers for Online Reinforcement Learning (published in TMLR) (☆23, updated Oct 15, 2024)
- ☆27, updated Feb 26, 2026
- LongAttn: Selecting Long-context Training Data via Token-level Attention (☆15, updated Jul 16, 2025)