lamm-mit / Cephalo-Phi-3-Vision-MoE
☆14 · Updated last year
Alternatives and similar repositories for Cephalo-Phi-3-Vision-MoE
Users interested in Cephalo-Phi-3-Vision-MoE are comparing it to the repositories listed below.
- A minimal implementation of a LLaVA-style VLM that can process interleaved image, text, and video inputs. ☆96 · Updated 9 months ago
- ☆29 · Updated 3 months ago
- DPO, but faster 🚀 ☆45 · Updated 10 months ago
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆82 · Updated 2 months ago
- Train, tune, and run inference with the Bamba model ☆133 · Updated 4 months ago
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year
- ☆77 · Updated last month
- [EMNLP 2024] Official PyTorch implementation code for realizing the technical part of Traversal of Layers (TroL) presenting new propagati… ☆98 · Updated last year
- Repository hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated this week
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆51 · Updated last month
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆161 · Updated 6 months ago
- A family of highly capable yet efficient large multimodal models ☆191 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆90 · Updated 4 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆211 · Updated 9 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆99 · Updated last year
- ☆97 · Updated last year
- ☆201 · Updated 10 months ago
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆62 · Updated 3 weeks ago
- ☆69 · Updated last year
- Google TPU optimizations for transformers models ☆120 · Updated 8 months ago
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated 10 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆53 · Updated 6 months ago
- ☆74 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆49 · Updated last year
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆148 · Updated this week
- Megatron's multi-modal data loader ☆249 · Updated last week
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 8 months ago
- MatFormer repo ☆62 · Updated 10 months ago
- A repository for research on medium-sized language models. ☆78 · Updated last year