torphix / infini-attention
PyTorch implementation of Infini-attention (https://arxiv.org/html/2404.07143v1)
☆21 · Updated last year
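The repository implements Infini-attention, whose core idea is a compressive memory updated with a linear-attention rule. A minimal NumPy sketch of the paper's memory update and retrieval follows; the function names are illustrative and not taken from the repository:

```python
import numpy as np

def elu_plus_one(x):
    # ELU(x) + 1: the positive nonlinearity applied to queries/keys
    # before they interact with the linear-attention memory
    return np.where(x > 0, x + 1.0, np.exp(x))

def memory_update(M, z, K, V):
    """Fold one segment's keys/values into the compressive memory.
    M: (d_k, d_v) memory matrix, z: (d_k,) normalization term,
    K: (n, d_k) keys, V: (n, d_v) values."""
    sK = elu_plus_one(K)
    M = M + sK.T @ V           # associative (outer-product) update
    z = z + sK.sum(axis=0)     # running key normalizer
    return M, z

def memory_retrieve(M, z, Q, eps=1e-6):
    """Read from memory with queries Q: (n, d_k); returns (n, d_v)."""
    sQ = elu_plus_one(Q)
    return (sQ @ M) / (sQ @ z + eps)[:, None]
```

In the full model this memory read is blended with standard local dot-product attention through a learned gate, giving bounded memory over arbitrarily long sequences.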
Alternatives and similar repositories for infini-attention
Users interested in infini-attention are comparing it to the libraries listed below.
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆135 · Updated last year
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆121 · Updated 6 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆44 · Updated last year
- ☆73 · Updated last year
- Scaling Preference Data Curation via Human-AI Synergy ☆69 · Updated last week
- imagetokenizer is a python package that helps you encode visuals and generate visual token ids from a codebook, supports both image and video… ☆34 · Updated last year
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆137 · Updated last month
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆85 · Updated 8 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆110 · Updated last month
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆101 · Updated last year
- Open-Pandora: On-the-fly Control Video Generation ☆34 · Updated 7 months ago
- FuseAI Project ☆87 · Updated 5 months ago
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated 10 months ago
- mllm-npu: training multimodal large language models on Ascend NPUs ☆90 · Updated 10 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆206 · Updated 6 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆144 · Updated 9 months ago
- ☆77 · Updated 3 months ago
- ☆56 · Updated 3 weeks ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆125 · Updated 8 months ago
- ☆33 · Updated 5 months ago
- Official Code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation ☆140 · Updated 8 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆85 · Updated 7 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆105 · Updated last month
- ☆94 · Updated 7 months ago
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆46 · Updated 4 months ago
- The simplest reproduction of R1-style results on a small model, illustrating the essential nature shared by o1-like models and DeepSeek R1: "Think is all you need." Experiments support that, for strong reasoning ability, the content of the think process is the core of AGI/ASI. ☆45 · Updated 5 months ago
- ☆173 · Updated 5 months ago
- GLM Series Edge Models ☆144 · Updated last month
- [NeurIPS 2024] Official Repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆222 · Updated 2 months ago