ysyisyourbrother / Galaxy-LM
Work-in-progress LLM framework.
☆13 · Updated 7 months ago
Alternatives and similar repositories for Galaxy-LM
Users interested in Galaxy-LM are comparing it to the libraries listed below.
- ☆201 · Updated last year
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices ☆33 · Updated last year
- A curated list of awesome projects and papers for AI on Mobile/IoT/Edge devices. Everything is continuously updating. Welcome contributio… ☆38 · Updated last year
- InFi is a library for building input filters for resource-efficient inference. ☆38 · Updated last year
- ☆99 · Updated last year
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆25 · Updated 4 years ago
- This is a list of awesome edgeAI inference related papers. ☆96 · Updated last year
- ☆16 · Updated last year
- ☆14 · Updated last year
- Survey Paper List - Efficient LLM and Foundation Models ☆248 · Updated 8 months ago
- Official Repo for "LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization" ☆32 · Updated last year
- This repository is established to store personal notes and annotated papers during daily research. ☆125 · Updated this week
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated 2 years ago
- ☆43 · Updated 11 months ago
- One-size-fits-all model for mobile AI, a novel paradigm for mobile AI in which the OS and hardware co-manage a foundation model that is c… ☆28 · Updated last year
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆46 · Updated 6 months ago
- MobiSys#114 ☆21 · Updated last year
- Summary of some awesome work for optimizing LLM inference ☆73 · Updated this week
- ☆21 · Updated last year
- ☆43 · Updated 2 weeks ago
- "Efficient Federated Learning for Modern NLP", to appear at MobiCom 2023. ☆34 · Updated last year
- ☆14 · Updated 9 months ago
- a curated list of high-quality papers on resource-efficient LLMs 🌱 ☆123 · Updated 2 months ago
- Source code and datasets for Ekya, a system for continuous learning on the edge. ☆105 · Updated 3 years ago
- Curated collection of papers in MoE model inference ☆191 · Updated 3 months ago
- ☆21 · Updated last year
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression ☆19 · Updated 10 months ago
- Source code for Jellyfish, a soft real-time inference serving system ☆13 · Updated 2 years ago
- ☆31 · Updated last year
- Miro [ACM MobiCom '23] Cost-effective On-device Continual Learning over Memory Hierarchy with Miro ☆15 · Updated last year