WesKwong / FLMMS
Federated Learning Multi-Machine Simulator: a Docker-based federated learning framework for simulating multi-machine training
☆9 · Updated 9 months ago
Alternatives and similar repositories for FLMMS:
Users interested in FLMMS are comparing it to the repositories listed below.
- A brief repo about paper research☆14 · Updated 4 months ago
- A tiny paper-rating web app☆28 · Updated this week
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes☆197 · Updated last month
- ☆16 · Updated last month
- A comprehensive guide for beginners in the field of data management and artificial intelligence☆142 · Updated 2 months ago
- ☆21 · Updated 3 months ago
- ☆37 · Updated this week
- The official implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference☆56 · Updated last week
- Course material for the UG course COMP4901Y☆52 · Updated 8 months ago
- The official Python version of CoreInfer: Accelerating Large Language Model Inference with Semantics-Inspired Adaptive Sparse Act…☆15 · Updated 3 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank☆34 · Updated 2 months ago
- ☆34 · Updated 2 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Infe…☆89 · Updated 2 months ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗)☆276 · Updated 2 weeks ago
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs☆138 · Updated 3 months ago
- ☆36 · Updated 2 months ago
- A brief review of computer architecture☆19 · Updated last year
- Source code for the paper "LongGenBench: Long-context Generation Benchmark"☆14 · Updated 3 months ago
- ☆45 · Updated 3 weeks ago
- Course resources from the School of Computer Science, University of Science and Technology of China (https://mbinary.xyz/ustc-cs/)☆18 · Updated 5 years ago
- Exam papers from selected courses of the USTC School of Computer Science☆63 · Updated 3 weeks ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning☆157 · Updated last month
- Accelerating Diffusion Transformers with Token-wise Feature Caching☆49 · Updated this week
- ☆85 · Updated 3 years ago
- Sharing my research toolchain☆82 · Updated last year
- Code release for VTW (AAAI 2025 Oral)☆29 · Updated last week
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark"☆83 · Updated 7 months ago
- Lecture notes for the 2023 Big Data Algorithms course at the University of Science and Technology of China☆31 · Updated last year
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy"☆70 · Updated 7 months ago
- USTC Computer Science Resources☆44 · Updated 3 years ago