This is a collection of our research on efficient AI, covering hardware-aware NAS and model compression.
☆86, updated Oct 25, 2024
Alternatives and similar repositories for Moonlit
Users who are interested in Moonlit are comparing it to the libraries listed below.
- Multi-branch model for concurrent execution (☆18, updated Jun 27, 2023)
- ☆40, updated Nov 22, 2025
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization (☆172, updated Nov 26, 2025)
- Code for the ICML 2022 paper "SPDY: Accurate Pruning with Speedup Guarantees" (☆20, updated May 3, 2023)
- ☆25, updated Oct 31, 2024
- Code for reproducing the results from "CrAM: A Compression-Aware Minimizer", accepted at ICLR 2023 (☆10, updated Mar 1, 2023)
- Implementation of PGONAS (CVPR 2022 Workshop) and RD-NAS (ICASSP 2023) (☆23, updated Apr 25, 2023)
- Flexible simulator for mixed-precision and format simulation of LLMs and vision transformers (☆51, updated Jul 10, 2023)
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference (☆46, updated Jun 4, 2024)
- ☆14, updated Jun 22, 2022
- A Golang implementation of the WeChat iPad protocol, built on gRPC. The code requires a gRPC server to pack and unpack messages before it can be used (☆12, updated Jul 8, 2019)
- The official code for "Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation" | [MM2… (☆14, updated Dec 7, 2024)
- The goal of this design is to use the PYNQ-Z2 development board to design a general convolutional neural network accelerator. And through r… (☆11, updated Sep 30, 2020)
- Code for NASViT (☆67, updated Apr 25, 2022)
- A distributed in-memory store for temporal knowledge graphs (☆10, updated Mar 20, 2024)
- Generative models for image captioning (☆10, updated Jun 7, 2017)
- ☆12, updated Jun 29, 2024
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better (☆16, updated Feb 15, 2025)
- Stencil with Optimized Dataflow Architecture (☆12, updated Feb 27, 2024)
- You Only Search Once: On Lightweight Differentiable Architecture Search for Resource-Constrained Embedded Platforms (☆12, updated Apr 17, 2023)
- Official implementation of the ICLR 2024 paper AffineQuant (☆28, updated Mar 30, 2024)
- [ICCV 2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization (☆28, updated Dec 6, 2023)
- [ICML'24] Pruner-Zero: Evolving Symbolic Pruning Metric from Scratch for LLMs (☆98, updated Nov 25, 2024)
- Official implementation for the ECCV 2022 paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" (☆61, updated Mar 19, 2023)
- ☆41, updated Mar 28, 2024
- Simple task-offloading client to offload HTTP requests to edge servers (☆10, updated Dec 8, 2022)
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized models (☆15, updated Jul 18, 2024)
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… (☆33, updated Jan 20, 2022)
- Inference source code for "How to Implement YOLO v3 Object Detector from Scratch", with line-by-line Chinese comments (☆11, updated Oct 31, 2018)
- Course projects for the open course MIT 6.824 (☆12, updated Jun 21, 2021)
- [ECCV'24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization (☆14, updated Nov 27, 2024)
- ☆15, updated Mar 21, 2025
- Toolkit for 3DGS compression research (☆27, updated Jan 6, 2026)
- ☆15, updated Sep 24, 2023
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" (☆30, updated Mar 28, 2024)
- [ICASSP'22] Integer-only Zero-shot Quantization for Efficient Speech Recognition (☆34, updated Oct 11, 2021)
- ☆10, updated Dec 9, 2021
- Open-source projects from Pallas Lab (☆20, updated Oct 10, 2021)
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… (☆67, updated Apr 15, 2024)