MILVLG / mlc-imp
Enabling everyone to develop, optimize, and deploy AI models natively on their own devices.
☆10 · Updated last year
Alternatives and similar repositories for mlc-imp
Users interested in mlc-imp are comparing it to the libraries listed below.
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated 2 years ago
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆51 · Updated this week
- Official PyTorch implementation of Self-emerging Token Labeling ☆35 · Updated last year
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models ☆28 · Updated last year
- Description and applications of OpenAI's paper about DALL-E (2021) and implementation of other (CLIP-guided) zero-shot text-to-image gene… ☆33 · Updated 3 years ago
- ☆33 · Updated 2 months ago
- A minimal implementation of LLaVA-style VLM with interleaved image, text, and video processing ability ☆96 · Updated 8 months ago
- EfficientSAM + YOLO World base model for use with Autodistill ☆10 · Updated last year
- Official PyTorch implementation of "LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging" (ICML 2024) ☆30 · Updated last year
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor Vision Language models ☆19 · Updated 8 months ago
- EdgeSAM model for use with Autodistill ☆27 · Updated last year
- Multi-Layer Key-Value sharing experiments on Pythia models ☆33 · Updated last year
- Various test models in WNNX format; they can be viewed with `pip install wnetron && wnetron` ☆12 · Updated 3 years ago
- Experimental scripts for researching data-adaptive learning rate scheduling ☆23 · Updated last year
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆36 · Updated last year
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 6 months ago
- ☆22 · Updated 2 years ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 11 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- The official implementation of the paper "Reducing Fine-Tuning Memory Overhead by Approximate and Memory-Sharing Backpropagation" ☆20 · Updated 8 months ago
- Lottery Ticket Adaptation ☆39 · Updated 9 months ago
- imagetokenizer is a Python package that helps you encode visuals and generate visual token ids from a codebook; supports both image and video… ☆35 · Updated last year
- Official implementation of ECCV24 paper: POA ☆24 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆25 · Updated last month
- KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference ☆18 · Updated 3 months ago
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 11 months ago
- A faster implementation of OpenCV-CUDA that uses OpenCV objects, and more! ☆51 · Updated last month
- Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta ☆16 · Updated 9 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆36 · Updated last month
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated 10 months ago