torphix / infini-attention
PyTorch implementation of https://arxiv.org/html/2404.07143v1 ("Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention")
☆20 · Updated last year
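The paper behind this repo augments standard dot-product attention with a fixed-size compressive memory so each layer can summarize arbitrarily long context. Below is a minimal PyTorch sketch of that mechanism, using the paper's linear memory update and ELU+1 feature map. It illustrates the technique, not this repository's actual code; all names (`CompressiveMemory`, `infini_attention_segment`, `beta`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def elu_plus_one(x):
    # The paper's non-negative feature map: sigma(x) = ELU(x) + 1
    return F.elu(x) + 1.0

class CompressiveMemory:
    """Fixed-size associative memory per head: M is (d_k, d_v), z is (d_k,)."""
    def __init__(self, num_heads, d_k, d_v, device=None):
        self.M = torch.zeros(num_heads, d_k, d_v, device=device)
        self.z = torch.zeros(num_heads, d_k, device=device)

    def retrieve(self, q):
        # q: (heads, seq, d_k) -> memory read-out: (heads, seq, d_v)
        sq = elu_plus_one(q)
        num = torch.einsum('hsd,hdv->hsv', sq, self.M)
        den = torch.einsum('hsd,hd->hs', sq, self.z).clamp_min(1e-6)
        return num / den.unsqueeze(-1)

    def update(self, k, v):
        # Linear update from the paper: M += sigma(K)^T V, z += sum_s sigma(K)
        sk = elu_plus_one(k)
        self.M = self.M + torch.einsum('hsd,hsv->hdv', sk, v)
        self.z = self.z + sk.sum(dim=1)

def infini_attention_segment(q, k, v, memory, beta):
    # Standard causal dot-product attention within the current segment
    scores = torch.einsum('hqd,hkd->hqk', q, k) / q.shape[-1] ** 0.5
    causal = torch.ones(q.shape[1], k.shape[1], device=q.device).triu(1).bool()
    local = torch.einsum('hqk,hkv->hqv',
                         scores.masked_fill(causal, float('-inf')).softmax(-1), v)
    # Read long-term context from memory, then write this segment into it
    from_mem = memory.retrieve(q)
    memory.update(k, v)
    # Learned per-head gate beta blends memory read-out with local attention
    g = torch.sigmoid(beta)                      # (heads, 1, 1)
    return g * from_mem + (1.0 - g) * local

if __name__ == "__main__":
    heads, seq, d = 4, 16, 32
    mem = CompressiveMemory(heads, d, d)
    beta = torch.zeros(heads, 1, 1)              # sigmoid(0) = 0.5: even blend
    for _ in range(3):                           # memory persists across segments
        q, k, v = (torch.randn(heads, seq, d) for _ in range(3))
        out = infini_attention_segment(q, k, v, mem, beta)
    print(out.shape)                             # torch.Size([4, 16, 32])
```

The paper additionally describes a delta-rule update that subtracts the current memory read-out before writing, and learns β per head so the model decides how much long-range context to mix in.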
Alternatives and similar repositories for infini-attention
Users interested in infini-attention are comparing it to the libraries listed below.
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆132 · Updated 11 months ago
- FuseAI Project ☆86 · Updated 3 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆121 · Updated 4 months ago
- ☆27 · Updated 3 months ago
- Open-Pandora: On-the-fly Control Video Generation ☆34 · Updated 5 months ago
- This is the official repository for Inheritune. ☆111 · Updated 3 months ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆79 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 · Updated 7 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆101 · Updated 11 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆100 · Updated 2 months ago
- PyTorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆59 · Updated last year
- ☆17 · Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆84 · Updated 5 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 7 months ago
- ☆78 · Updated 4 months ago
- Masked Structural Growth for 2x Faster Language Model Pre-training ☆23 · Updated last year
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆82 · Updated 6 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆101 · Updated 2 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆55 · Updated 3 weeks ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆42 · Updated 10 months ago
- Delta-CoMe can achieve near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆57 · Updated 5 months ago
- mllm-npu: training multimodal large language models on Ascend NPUs ☆90 · Updated 8 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆164 · Updated last week
- HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models ☆43 · Updated 5 months ago
- ☆29 · Updated 8 months ago
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video… ☆34 · Updated 10 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory ☆92 · Updated last week
- ☆73 · Updated last year