guanweifan / awesome-efficient-vla
A curated paper list and taxonomy of efficient Vision-Language-Action (VLA) models for embodied manipulation.
☆50 · Updated 2 weeks ago
Alternatives and similar repositories for awesome-efficient-vla
Users interested in awesome-efficient-vla are comparing it to the repositories listed below.
- Deploying LLMs offline on the NVIDIA Jetson platform marks the dawn of a new era in embodied intelligence, where devices can function ind… ☆103 · Updated last year
- [NeurIPS'24] Efficient and accurate memory saving method towards W4A4 large multi-modal models. ☆91 · Updated 10 months ago
- ☆61 · Updated last year
- 🔥 This is a curated list of "A survey on Efficient Vision-Language Action Models" research. We will continue to maintain and update the r… ☆83 · Updated 2 weeks ago
- A collection of VLMs papers, blogs, and projects, with a focus on VLMs in Autonomous Driving and related reasoning techniques. ☆11 · Updated last year
- [ECCV 2024] AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer ☆37 · Updated 11 months ago
- [NeurIPS 24] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆133 · Updated last year
- Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning ☆298 · Updated 8 months ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models). ☆121 · Updated last year
- Adapting VLMs to Bench2Drive. ☆166 · Updated last month
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆66 · Updated 8 months ago
- LightVLA ☆62 · Updated 2 weeks ago
- Doe-1: Closed-Loop Autonomous Driving with Large World Model ☆106 · Updated 10 months ago
- ☆464 · Updated last month
- 【IEEE T-IV】 A systematic survey of multi-modal and multi-task visual understanding foundation models for driving scenarios ☆50 · Updated last year