torphix / infini-attention
PyTorch implementation of Infini-attention (https://arxiv.org/html/2404.07143v1)
☆20 · Updated 11 months ago
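Infini-attention, the mechanism this repo implements, augments per-segment softmax attention with a compressive memory: keys and values from past segments are folded into a fixed-size matrix via a non-negative (ELU+1) linear-attention kernel, read back at each new segment, and mixed with local attention through a learned gate. Below is a minimal single-head sketch of that recurrence, written from the paper's equations rather than this repository's code; names like `infini_attention_segment` and `beta` are invented for illustration.

```python
import torch
import torch.nn.functional as F

def elu_plus_one(x):
    # sigma(x) = ELU(x) + 1, the non-negative kernel used for memory reads/writes
    return F.elu(x) + 1.0

def infini_attention_segment(q, k, v, memory, norm, beta):
    """One segment step for a single head.
    q, k:   (seq, d_key)   v: (seq, d_value)
    memory: (d_key, d_value) compressive memory carried across segments
    norm:   (d_key,) running normalization term z
    beta:   scalar learned gate mixing memory read-out and local attention
    """
    sq, sk = elu_plus_one(q), elu_plus_one(k)
    # Memory read: A_mem = sigma(Q) M / (sigma(Q) z)
    a_mem = (sq @ memory) / (sq @ norm).clamp(min=1e-6).unsqueeze(-1)
    # Local causal dot-product attention within the segment
    scores = (q @ k.T) / q.shape[-1] ** 0.5
    causal = torch.triu(torch.ones_like(scores), diagonal=1).bool()
    a_dot = scores.masked_fill(causal, float("-inf")).softmax(dim=-1) @ v
    # Gate the two read-outs, then write this segment into memory
    gate = torch.sigmoid(beta)
    out = gate * a_mem + (1.0 - gate) * a_dot
    new_memory = memory + sk.T @ v        # simple (non-delta-rule) update
    new_norm = norm + sk.sum(dim=0)
    return out, new_memory, new_norm

# Toy usage: stream four segments through one head
d = 64
mem, z = torch.zeros(d, d), torch.zeros(d)
for seg in torch.randn(4, 128, d).unbind(0):
    out, mem, z = infini_attention_segment(seg, seg, seg, mem, z, torch.tensor(0.0))
```

Because the memory is a fixed d_key × d_value matrix updated once per segment, compute and memory stay constant no matter how many segments have been consumed, which is the paper's route to unbounded context.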
Alternatives and similar repositories for infini-attention:
Users interested in infini-attention are comparing it to the libraries listed below.
- ☆17 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆140 · Updated 5 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆40 · Updated 8 months ago
- Open-Pandora: On-the-fly Control Video Generation ☆32 · Updated 3 months ago
- Copies the MLP of Llama 3 eight times as 8 experts, creates a randomly initialized router, and adds a load-balancing loss to construct an 8x8B MoE (a minimal sketch of this recipe follows the list) ☆26 · Updated 8 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆74 · Updated 4 months ago
- FuseAI Project ☆83 · Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆128 · Updated 9 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated last year
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video… ☆30 · Updated 8 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆149 · Updated 2 months ago
- ☆73 · Updated last year
- Enable Next-sentence Prediction for Large Language Models with Faster Speed, Higher Accuracy and Longer Context ☆27 · Updated 6 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆81 · Updated 3 months ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆78 · Updated last year
- ☆30 · Updated last month
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆121 · Updated 2 months ago
- ☆92 · Updated 3 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆95 · Updated 2 weeks ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆198 · Updated 2 months ago
- ☆44 · Updated last week
- ☆28 · Updated 6 months ago
- Implementation of the Google paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" in PyTorch ☆53 · Updated last month
- This is the official repository for Inheritune. ☆109 · Updated last month
- Our 2nd-gen LMM ☆33 · Updated 9 months ago
- ☆75 · Updated last month
- A repository for research on medium-sized language models. ☆77 · Updated 9 months ago
- mllm-npu: training multimodal large language models on Ascend NPUs ☆91 · Updated 6 months ago
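The "copies the MLP of Llama 3 eight times" entry above describes a common upcycling recipe: clone one trained dense MLP into N identical experts, attach a randomly initialized router, and keep routing balanced with an auxiliary loss. Here is a hedged sketch of that recipe, assuming a top-k router and a Switch-Transformer-style balancing loss; class and argument names such as `CopiedMoE` are illustrative, and the toy `nn.Sequential` below stands in for Llama 3's gated MLP.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class CopiedMoE(nn.Module):
    def __init__(self, dense_mlp: nn.Module, d_model: int,
                 n_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Each expert starts as an exact copy of the trained dense MLP
        self.experts = nn.ModuleList(
            copy.deepcopy(dense_mlp) for _ in range(n_experts))
        self.router = nn.Linear(d_model, n_experts, bias=False)  # random init
        self.top_k = top_k

    def forward(self, x):                      # x: (tokens, d_model)
        probs = self.router(x).softmax(dim=-1)  # (tokens, n_experts)
        weight, idx = probs.topk(self.top_k, dim=-1)
        weight = weight / weight.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            routed = (idx == e)                   # (tokens, top_k)
            token_mask = routed.any(dim=-1)
            if token_mask.any():
                w = (weight * routed).sum(dim=-1, keepdim=True)[token_mask]
                out[token_mask] += w * expert(x[token_mask])
        # Switch-style balancing loss: fraction routed to e * mean prob for e
        frac = F.one_hot(idx, probs.shape[-1]).float().mean(dim=(0, 1))
        aux_loss = probs.shape[-1] * (frac * probs.mean(dim=0)).sum()
        return out, aux_loss

# Toy usage with a stand-in dense MLP
mlp = nn.Sequential(nn.Linear(512, 2048), nn.SiLU(), nn.Linear(2048, 512))
moe = CopiedMoE(mlp, d_model=512, n_experts=8, top_k=2)
y, aux = moe(torch.randn(16, 512))
```

During training the returned `aux_loss` is scaled by a small coefficient (the Switch Transformer paper uses 0.01) and added to the language-modeling loss so the randomly initialized router does not starve any of the identical experts.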