luckyfan-cs / Template-of-HKUST-GZ-Thesis
☆49 · Updated 11 months ago
Alternatives and similar repositories for Template-of-HKUST-GZ-Thesis
Users interested in Template-of-HKUST-GZ-Thesis are comparing it to the repositories listed below.
- ☆26 · Updated last year
- ☆46 · Updated 8 months ago
- ☆28 · Updated last year
- ☆31 · Updated 11 months ago
- The official implementation of WSDM'24 paper <DeSCo: Towards Generalizable and Scalable Deep Subgraph Counting> ☆22 · Updated last year
- ☆60 · Updated 2 months ago
- H2-LLM: Hardware-Dataflow Co-Exploration for Heterogeneous Hybrid-Bonding-based Low-Batch LLM Inference ☆19 · Updated 2 months ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆94 · Updated 10 months ago
- Official code implementation for the ICLR 2025 accepted paper "Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives" ☆34 · Updated 3 months ago
- ☆14 · Updated 9 months ago
- HKUST Thesis LaTeX3 Template (Available on Overleaf/TeXPage) ☆88 · Updated last month
- analyse problems of AI with Math and Code ☆17 · Updated 2 weeks ago
- ☆41 · Updated 6 months ago
- HISIM introduces a suite of analytical models at the system level to speed up performance prediction for AI models, covering logic-on-logic… ☆38 · Updated 3 months ago
- Facilitating selective network routing for Ivanti-connected devices to a school's network, using port forwarding for enhanced access control… ☆13 · Updated last year
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆28 · Updated last year
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆46 · Updated 4 months ago
- A record of coursework in AI Computing Systems, mainly focusing on high performance computing development for MLU. ☆14 · Updated 2 years ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆108 · Updated 2 years ago
- OriGen: Enhancing RTL Code Generation with Code-to-Code Augmentation and Self-Reflection (ICCAD 2024) ☆19 · Updated 8 months ago
- "Knock, knock!" "Who's there?" "Dobi." ☆16 · Updated 3 months ago
- The official code for DATE'23 paper <CLAP: Locality Aware and Parallel Triangle Counting with Content Addressable Memory> ☆23 · Updated this week
- ChatEDA: A Large Language Model Powered Autonomous Agent for EDA ☆26 · Updated last month
- This is the source code of our ICML'25 paper, titled "Accelerating Large Language Model Reasoning via Speculative Search". ☆20 · Updated 3 weeks ago
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆105 · Updated 11 months ago
- ☆21 · Updated last year
- CASS: Nvidia to AMD Transpilation with Data, Models, and Benchmark ☆20 · Updated 3 weeks ago
- ☆31 · Updated last year
- Simple Python interface for ABC ☆23 · Updated 2 years ago
- [DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer… ☆57 · Updated 11 months ago