torphix / infini-attention
PyTorch implementation of Infini-attention (https://arxiv.org/html/2404.07143v1)
☆21 · Updated last year
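For orientation, here is a minimal single-head sketch of the mechanism the paper describes: the sequence is processed in fixed-size segments, each segment runs causal local softmax attention, a compressive linear-attention memory carries context across segments, and a learned scalar gate mixes the two streams. This is an illustrative reading of the paper, not this repository's API; the class name, `segment_len`, and the single-head simplification (no multi-head split, no positional encoding) are assumptions.

```python
# A minimal, single-head sketch of Infini-attention (arXiv:2404.07143).
# Illustrative only: names and dimensions are assumptions, not the repo's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InfiniAttentionSketch(nn.Module):
    def __init__(self, dim: int, segment_len: int = 2048):
        super().__init__()
        self.dim = dim
        self.segment_len = segment_len
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.o_proj = nn.Linear(dim, dim, bias=False)
        # Learned scalar gate mixing memory retrieval with local attention.
        self.gate = nn.Parameter(torch.zeros(1))

    @staticmethod
    def _phi(x):
        # ELU + 1 keeps the linear-attention feature map positive.
        return F.elu(x) + 1.0

    def forward(self, x):
        # x: (batch, seq_len, dim); the sequence is consumed segment by segment.
        b, n, d = x.shape
        mem = torch.zeros(b, d, d, device=x.device, dtype=x.dtype)  # compressive memory M
        z = torch.zeros(b, d, 1, device=x.device, dtype=x.dtype)    # normalization term z
        outputs = []
        for s in range(0, n, self.segment_len):
            seg = x[:, s:s + self.segment_len]
            q, k, v = self.q_proj(seg), self.k_proj(seg), self.v_proj(seg)

            # 1) Retrieve from memory built over previous segments:
            #    A_mem = phi(Q) M / (phi(Q) z).
            sigma_q = self._phi(q)
            a_mem = (sigma_q @ mem) / (sigma_q @ z + 1e-6)

            # 2) Causal local dot-product attention within the segment.
            t = seg.size(1)
            scores = q @ k.transpose(-1, -2) / d ** 0.5
            causal = torch.triu(torch.ones(t, t, device=x.device, dtype=torch.bool), 1)
            a_local = scores.masked_fill(causal, float("-inf")).softmax(dim=-1) @ v

            # 3) Fold this segment into memory (plain linear update; the paper
            #    also describes a delta-rule variant, omitted here).
            sigma_k = self._phi(k)
            mem = mem + sigma_k.transpose(-1, -2) @ v
            z = z + sigma_k.sum(dim=1, keepdim=True).transpose(-1, -2)

            # 4) Blend long-range retrieval and local attention with the gate.
            g = torch.sigmoid(self.gate)
            outputs.append(g * a_mem + (1 - g) * a_local)
        return self.o_proj(torch.cat(outputs, dim=1))
```

Because the memory is a fixed-size matrix, per-segment cost stays constant regardless of how much context has been absorbed; the scalar gate lets training trade long-range retrieval against local precision.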
Alternatives and similar repositories for infini-attention
Users interested in infini-attention are comparing it to the repositories listed below.
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆118 · Updated 6 months ago
- ☆75 · Updated last year
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆97 · Updated last year
- ☆56 · Updated last year
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆211 · Updated 11 months ago
- mllm-npu: training multimodal large language models on Ascend NPUs ☆94 · Updated last year
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆128 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆126 · Updated 10 months ago
- ☆85 · Updated 8 months ago
- ☆186 · Updated 10 months ago
- Geometric-Mean Policy Optimization ☆95 · Updated 3 weeks ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆143 · Updated 6 months ago
- ☆29 · Updated last year
- The simplest reproduction of R1 results on small models, explaining the most important essence shared by o1-like models and DeepSeek R1: "Think is all you need." Experiments corroborate that, for strong reasoning ability, the content of the think process is the core of AGI/ASI. ☆44 · Updated 10 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆95 · Updated last month
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Visual Embedding Distillation, arXiv 2024 ☆66 · Updated last month
- ☆35 · Updated 10 months ago
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated last year
- FuseAI Project ☆87 · Updated 10 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 6 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆148 · Updated last year
- ☆105 · Updated 6 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆197 · Updated this week
- GLM Series Edge Models ☆154 · Updated 5 months ago
- ☆92 · Updated 6 months ago
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ☆328 · Updated 6 months ago
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆245 · Updated 3 months ago
- ☆300 · Updated 6 months ago