WesKwong / FLMMS
Federated Learning Multi-Machine Simulator: A Docker-based federated learning framework for simulating multi-machine training
☆9 · Updated 11 months ago
Alternatives and similar repositories for FLMMS:
Users interested in FLMMS are comparing it to the repositories listed below:
- A tiny paper-rating web app ☆33 · Updated 2 weeks ago
- USTC Fall 2021 Operations Research course resources ☆8 · Updated 3 years ago
- A brief repo about paper research ☆14 · Updated 6 months ago
- 📚 Collection of awesome generation acceleration resources ☆173 · Updated this week
- ☆18 · Updated last month
- Classification and solutions for PKU-CSSummerCamp-OnlineJudge ☆14 · Updated last year
- USTC-Computer Science-Resources ☆45 · Updated 3 years ago
- A sparse attention kernel supporting mixed sparse patterns ☆161 · Updated last month
- A modular framework designed for easy experimentation with optimization-based diffusion sampling algorithms ☆14 · Updated 2 months ago
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs ☆146 · Updated 5 months ago
- The official Python version of CoreInfer: Accelerating Large Language Model Inference with Semantics-Inspired Adaptive Sparse Act… ☆15 · Updated 4 months ago
- Digital Image Processing and Analysis, Spring 2022 (Wan Shouhong): course assignments & labs ☆15 · Updated last year
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆76 · Updated last month
- Efficient 2:4 sparse training algorithms and implementations ☆51 · Updated 3 months ago
- Lecture notes for the USTC Big Data Algorithms course, 2023 ☆32 · Updated last year
- A curated collection of noteworthy MLSys bloggers (Algorithms/Systems) ☆199 · Updated 2 months ago
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆31 · Updated 2 months ago
- The official implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference ☆66 · Updated last month
- [EMNLP 2024 Findings🔥] Official implementation of "D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Larg… ☆92 · Updated 4 months ago
- Awesome-LLM-KV-Cache: a curated list of 📙 Awesome LLM KV Cache Papers with Codes ☆231 · Updated last week
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark" ☆91 · Updated 8 months ago
- Exam papers from selected courses in the USTC School of Computer Science ☆63 · Updated last month
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗) ☆339 · Updated this week
- Curation of resources for LLM research, screened by @tongyx361 to ensure high quality and accompanied with elaborately-written concise de… ☆45 · Updated 8 months ago
- Official implementation for Yuan & Liu & Zhong et al., KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark o… ☆67 · Updated 2 weeks ago
- ☆22 · Updated 5 months ago
- A comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆75 · Updated 3 weeks ago