jiahe7ay / infini-mini-transformer
This is a personal reimplementation of Google's Infini-Transformer, using a small 2B model. The project includes both the model and the training code.
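For context, the core idea behind Infini-Transformer (from the paper "Leave No Context Behind") is a fixed-size compressive memory that is read and then updated once per segment with a linear-attention rule, so the key-value cache never grows with context length. Below is a minimal NumPy sketch of that memory update, simplified from the paper; the function name, shapes, and the small epsilon are illustrative and not taken from this repository, which additionally uses a learned gate to mix this readout with standard local attention.

```python
import numpy as np

def elu_plus_one(x):
    # sigma(x) = ELU(x) + 1 keeps the kernel features strictly positive
    return np.where(x > 0, x + 1.0, np.exp(x))

def infini_memory_step(M, z, Q, K, V, eps=1e-6):
    """One segment of the compressive-memory update (simplified sketch).

    M: (d_k, d_v) running memory matrix, z: (d_k,) normalization term.
    Q, K: (seg_len, d_k); V: (seg_len, d_v).
    Returns the memory readout for this segment plus the updated (M, z).
    """
    sQ, sK = elu_plus_one(Q), elu_plus_one(K)
    # Read from memory accumulated over all *previous* segments
    A_mem = (sQ @ M) / ((sQ @ z)[:, None] + eps)   # (seg_len, d_v)
    # Fold this segment's keys/values into memory; the per-segment
    # KV cache can then be discarded, keeping state at O(d_k * d_v)
    M = M + sK.T @ V
    z = z + sK.sum(axis=0)
    return A_mem, M, z

# Process three consecutive segments while carrying only (M, z)
d_k, d_v, seg_len = 8, 8, 4
rng = np.random.default_rng(0)
M, z = np.zeros((d_k, d_v)), np.zeros(d_k)
for _ in range(3):
    Q = rng.standard_normal((seg_len, d_k))
    K = rng.standard_normal((seg_len, d_k))
    V = rng.standard_normal((seg_len, d_v))
    A_mem, M, z = infini_memory_step(M, z, Q, K, V)
```

In the full model each attention head keeps its own (M, z) and blends A_mem with ordinary dot-product attention over the current segment via a learned sigmoid gate; this sketch omits that gating and the projections.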
☆58 · Apr 20, 2024 · Updated last year
Alternatives and similar repositories for infini-mini-transformer
Users interested in infini-mini-transformer are comparing it to the repositories listed below.
- ☆13 · Apr 15, 2024 · Updated last year
- ☆34 · Dec 18, 2025 · Updated last month
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- PyTorch implementation for PaLM: A Hybrid Parser and Language Model ☆10 · Jan 7, 2020 · Updated 6 years ago
- Official implementation of ACL 2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … ☆14 · Aug 25, 2023 · Updated 2 years ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Mar 15, 2024 · Updated last year
- Official code repository for the paper "Key-value memory in the brain" ☆31 · Feb 25, 2025 · Updated 11 months ago
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆168 · Nov 12, 2023 · Updated 2 years ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆168 · Jun 13, 2024 · Updated last year
- ☆16 · Mar 13, 2023 · Updated 2 years ago
- ☆62 · Jun 17, 2024 · Updated last year
- Unofficial PyTorch/🤗 Transformers (Gemma/Llama3) implementation of "Leave No Context Behind: Efficient Infinite Context Transformers with I…" ☆374 · Apr 23, 2024 · Updated last year
- RWKV model implementation ☆37 · Jul 15, 2023 · Updated 2 years ago
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Jan 23, 2024 · Updated 2 years ago
- [ACL '20] Highway Transformer: A Gated Transformer ☆33 · Dec 5, 2021 · Updated 4 years ago
- Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization ☆16 · Jun 5, 2018 · Updated 7 years ago
- Parallel Associative Scan for Language Models ☆18 · Jan 8, 2024 · Updated 2 years ago
- FLASHQuad_pytorch ☆68 · Apr 1, 2022 · Updated 3 years ago
- [EMNLP 2023] Knowledge Rumination for Pre-trained Language Models ☆17 · Jun 29, 2023 · Updated 2 years ago
- Sparse Attention with Linear Units ☆20 · Apr 21, 2021 · Updated 4 years ago
- ☆17 · Feb 19, 2024 · Updated last year
- Reading-order prediction (LayoutReader) ☆19 · May 8, 2025 · Updated 9 months ago
- ☆20 · May 30, 2024 · Updated last year
- AGaLiTe: Approximate Gated Linear Transformers for Online Reinforcement Learning (published in TMLR) ☆23 · Oct 15, 2024 · Updated last year
- Code for the ICML 2020 paper: Do RNN and LSTM Have Long Memory? ☆17 · Jan 6, 2021 · Updated 5 years ago
- Efficient PScan implementation in PyTorch ☆17 · Jan 2, 2024 · Updated 2 years ago
- Repository for augmenting data in forms, invoices, and receipts for document image understanding ☆17 · May 6, 2021 · Updated 4 years ago
- Official repository for Efficient Linear-Time Attention Transformers ☆18 · Jun 2, 2024 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Mar 12, 2024 · Updated last year
- [ICML '24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆445 · Oct 16, 2024 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT examples, 200,000 English multi-turn SFT examples, and … ☆18 · Apr 12, 2024 · Updated last year
- ☆27 · Jul 28, 2025 · Updated 6 months ago
- A context window 32 times longer than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆50 · Jun 16, 2023 · Updated 2 years ago
- Implementations of several RNN variants ☆52 · Mar 29, 2023 · Updated 2 years ago
- Linear Attention Sequence Parallelism (LASP) ☆88 · Jun 4, 2024 · Updated last year
- [ICML '24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆665 · Jun 1, 2024 · Updated last year
- ☆22 · Dec 15, 2023 · Updated 2 years ago
- A collection of trading settings for the Galileo FX trading robot. These settings are designed to optimize trading strategies across vari… ☆13 · Jan 27, 2025 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆484 · Mar 19, 2024 · Updated last year