Unparalleled-Calvin / Fudan-course-search
☆10 · Updated 4 years ago
Alternatives and similar repositories for Fudan-course-search
Users interested in Fudan-course-search are comparing it to the repositories listed below.
- ICS_2020_PJ ☆10 · Updated 4 years ago
- THU Compiler Principles course lab: a stripped-down C-language compiler (the discussion-question section of the report contains errors) ☆1 · Updated 3 years ago
- ☆10 · Updated 4 months ago
- ☆88 · Updated last month
- ☆13 · Updated last year
- iOS Frontend of SJTU Anonymous Forum Wukefenggao ☆20 · Updated 4 years ago
- ☆78 · Updated 11 months ago
- ⚡ Bring some magic to i.sjtu.edu.cn ☆20 · Updated 5 years ago
- Course Website for ICS Spring 2020 at Fudan University https://sunfloweraries.github.io/ICS-Spring20-Fudan/ ☆12 · Updated 5 years ago
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference ☆29 · Updated last month
- A happy way for research! ☆23 · Updated 2 years ago
- ☆26 · Updated last month
- An auxiliary project analyzing the characteristics of KV in DiT attention ☆31 · Updated 7 months ago
- A ready-to-use LaTeX bundle for everyday paperwork ☆35 · Updated 2 years ago
- One-click runnable labs for Chen Yunji's Intelligent Computing Systems course ☆40 · Updated 4 years ago
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity ☆45 · Updated 2 weeks ago
- Tsinghua University "Feiyue" study-abroad database ☆27 · Updated this week
- Golang backend for Weiming Tree Hole ☆22 · Updated 4 years ago
- A survey of papers on efficient computation for large-scale models ☆34 · Updated 7 months ago
- Beihang University "Fengru Cup" thesis template (2022) ☆11 · Updated 3 years ago
- Course notes for Cyber Security (THUCST 2023 Spring) ☆30 · Updated 2 years ago
- An unofficial private shuttle-bus assistant for newcomers to Yanyuan (PKU) ☆60 · Updated 5 months ago
- DBMS course project for Introduction to Database Systems (Tsinghua CS, 2022); supports parsing and executing basic SQL ☆11 · Updated 2 years ago
- Efficient 2:4 sparse training algorithms and implementations ☆55 · Updated 7 months ago
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for Diffusion Models ☆22 · Updated last year
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆54 · Updated 10 months ago
- Openreviewers: Multi Agent Academic Review Simulation System ☆20 · Updated last year
- ☆11 · Updated 4 years ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆118 · Updated last week
- A sparse attention kernel supporting mixed sparse patterns ☆256 · Updated 5 months ago