SiriusNEO / NightWizard
SJTU CS2951 Computer Architecture Course Project: a RISC-V CPU implemented in Verilog HDL.
☆10 · Updated 3 years ago
Alternatives and similar repositories for NightWizard
Users interested in NightWizard are comparing it to the repositories listed below.
- MS108 Course Project, SJTU ACM Class. ☆31 · Updated 2 years ago
- A compiler from the "Mx* language" (a C++- and Java-like language) to RV32I assembly, with optimizations on LLVM IR. SJTU CS2966 Project. ☆11 · Updated 2 years ago
- ☆78 · Updated 11 months ago
- ☆13 · Updated last year
- ☆117 · Updated last week
- Data Structure 2022 homework. ☆1 · Updated 3 years ago
- YPU is part of a pipelined RISC-V CPU, for demo use. ☆5 · Updated 4 years ago
- GitHub repository of the HPCA 2025 paper "UniNDP: A Unified Compilation and Simulation Tool for Near DRAM Processing Architectures". ☆13 · Updated 8 months ago
- ☆172 · Updated last year
- SJTU ACM Class Architecture 2021 Assignment. ☆7 · Updated 3 years ago
- A reading list on popular MLSys topics. ☆11 · Updated 4 months ago
- ☆23 · Updated last year
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing. ☆88 · Updated last year
- ☆76 · Updated last year
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System. ☆46 · Updated 2 weeks ago
- WaferLLM: Large Language Model Inference at Wafer Scale. ☆30 · Updated 3 weeks ago
- Open-source framework for the HPCA 2024 paper "Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators". ☆85 · Updated 3 months ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24). ☆147 · Updated last year
- LLM serving cluster simulator. ☆108 · Updated last year
- Open-source implementation of "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow". ☆58 · Updated 8 months ago
- Large Language Model (LLM) serving paper and resource list. ☆24 · Updated 2 months ago
- ☆42 · Updated this week
- Artifact for the paper "PIM is All You Need: A CXL-Enabled GPU-Free System for LLM Inference" (ASPLOS 2025). ☆82 · Updated 3 months ago
- GoPTX: Fine-grained GPU Kernel Fusion by PTX-level Instruction Flow Weaving. ☆17 · Updated this week
- ☆49 · Updated last year
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24). ☆42 · Updated 7 months ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24). ☆53 · Updated last year
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention]. ☆15 · Updated 4 months ago
- ☆146 · Updated 6 months ago
- ☆74 · Updated 3 years ago