Unparalleled-Calvin / Fudan-course-search
☆10 · Updated 4 years ago
Alternatives and similar repositories for Fudan-course-search
Users interested in Fudan-course-search are comparing it to the repositories listed below.
- ICS_2020_PJ · ☆10 · Updated 4 years ago
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models · ☆106 · Updated 4 months ago
- A collection of papers on discrete diffusion models · ☆164 · Updated 3 months ago
- Course Website for ICS Spring 2020 at Fudan University https://sunfloweraries.github.io/ICS-Spring20-Fudan/ · ☆12 · Updated 5 years ago
- A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention · ☆189 · Updated last month
- An auxiliary project analyzing the characteristics of KV in DiT Attention. · ☆32 · Updated 10 months ago
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference · ☆38 · Updated 4 months ago
- Guide for surviving at UIUC (under development) · ☆76 · Updated 2 months ago
- ☆20 · Updated 4 months ago
- ☆78 · Updated last year
- [TMLR 2025] Efficient Diffusion Models: A Survey · ☆113 · Updated 4 months ago
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference · ☆163 · Updated 3 weeks ago
- SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse–Linear Attention · ☆78 · Updated this week
- ☆17 · Updated last year
- Welcome to the 'In Context Learning Theory' Reading Group · ☆30 · Updated 11 months ago
- [NeurIPS 2024] Source code for our paper "Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models". · ☆13 · Updated 3 months ago
- One-click runnable after-class labs for Chen Yunji's Intelligent Computing Systems (智能计算系统) course · ☆40 · Updated 4 years ago
- Beihang University (BUAA) "Feng Ru Cup" thesis template (2022) · ☆11 · Updated 3 years ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… · ☆164 · Updated last month
- ☆19 · Updated 2 years ago
- [NeurIPS 2025] ScaleKV: Memory-Efficient Visual Autoregressive Modeling with Scale-Aware KV Cache Compression · ☆49 · Updated 4 months ago
- A lightweight inference engine built for block diffusion models · ☆30 · Updated last week
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers · ☆66 · Updated last year
- [ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". · ☆111 · Updated 3 months ago
- ☆10 · Updated 3 weeks ago
- Efficient 2:4 sparse training algorithms and implementations · ☆56 · Updated 10 months ago
- Course notes for Cyber Security (THUCST 2023 Spring) · ☆29 · Updated 2 years ago
- [NeurIPS 2024] The official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification · ☆29 · Updated 6 months ago
- A Collection of Papers on Diffusion Language Models · ☆132 · Updated last month
- The most open diffusion language model for code generation: releasing pretraining, evaluation, inference, and checkpoints. · ☆330 · Updated last week