This is a personal reimplementation of Google's Infini-transformer, using a small 2B model. The project includes both the model and the training code.
☆59 · Apr 20, 2024 · Updated last year
Alternatives and similar repositories for infini-mini-transformer
Users interested in infini-mini-transformer are comparing it to the libraries listed below.
- A one-click project for role-playing with small-parameter large models; both data construction and training are included in this project ☆25 · Mar 31, 2024 · Updated last year
- ☆13 · Apr 15, 2024 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Sep 4, 2025 · Updated 6 months ago
- A repository for individuals to experiment with and reproduce the pre-training process of LLMs. ☆496 · May 1, 2025 · Updated 10 months ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Mar 15, 2024 · Updated 2 years ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆168 · Nov 12, 2023 · Updated 2 years ago
- Official Implementation of ACL 2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … ☆14 · Aug 25, 2023 · Updated 2 years ago
- Repository for augmenting data in forms, invoices, and receipts for document image understanding ☆17 · May 6, 2021 · Updated 4 years ago
- Official Code Repository for the paper "Key-value memory in the brain" ☆31 · Feb 25, 2025 · Updated last year
- Unofficial PyTorch/🤗Transformers (Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with I… ☆375 · Apr 23, 2024 · Updated last year
- Megatron LM 11B on Huggingface Transformers ☆27 · Jul 11, 2021 · Updated 4 years ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆169 · Jun 13, 2024 · Updated last year
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Jan 23, 2024 · Updated 2 years ago
- FLASHQuad_pytorch ☆68 · Apr 1, 2022 · Updated 3 years ago
- Advanced Formal Language Theory (263-5352-00L; Frühjahr 2023) ☆10 · Feb 21, 2023 · Updated 3 years ago
- ☆36 · Dec 18, 2025 · Updated 3 months ago
- [ACL '20] Highway Transformer: A Gated Transformer. ☆33 · Dec 5, 2021 · Updated 4 years ago
- AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension (ACL 2022) ☆27 · May 20, 2022 · Updated 3 years ago
- ☆20 · May 30, 2024 · Updated last year
- RWKV model implementation ☆37 · Jul 15, 2023 · Updated 2 years ago
- Parallel Associative Scan for Language Models ☆18 · Jan 8, 2024 · Updated 2 years ago
- Linear Attention Sequence Parallelism (LASP) ☆88 · Jun 4, 2024 · Updated last year
- My fork of Allen AI's OLMo for educational purposes. ☆28 · Dec 5, 2024 · Updated last year
- ☆62 · Jun 17, 2024 · Updated last year
- [EMNLP 2023] Knowledge Rumination for Pre-trained Language Models ☆17 · Jun 29, 2023 · Updated 2 years ago
- Sparse Attention with Linear Units ☆20 · Apr 21, 2021 · Updated 4 years ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Apr 12, 2024 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Mar 12, 2024 · Updated 2 years ago
- Implementation of the paper Data Engineering for Scaling Language Models to 128K Context ☆490 · Mar 19, 2024 · Updated 2 years ago
- ☆16 · Mar 13, 2023 · Updated 3 years ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆58 · Mar 22, 2026 · Updated last week
- Reading order, Layoutreader ☆19 · May 8, 2025 · Updated 10 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆450 · Oct 16, 2024 · Updated last year
- Official Repository for Efficient Linear-Time Attention Transformers. ☆18 · Jun 2, 2024 · Updated last year
- A purer tokenizer with a higher compression ratio ☆488 · Nov 27, 2024 · Updated last year
- Online Preference Alignment for Language Models via Count-based Exploration ☆17 · Jan 14, 2025 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Length (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- ☆13 · Jun 19, 2021 · Updated 4 years ago