SiriusNEO / NightWizard
SJTU CS2951 Computer Architecture Course Project: a RISC-V CPU implemented in Verilog HDL.
☆10 · Updated 4 years ago
Alternatives and similar repositories for NightWizard
Users interested in NightWizard are comparing it to the repositories listed below.
- MS108 Course Project, SJTU ACM Class. ☆32 · Updated 3 years ago
- ☆78 · Updated last year
- ☆12 · Updated 2 years ago
- ☆145 · Updated last month
- A compiler from the "Mx* language" (a C++- and Java-like language) to RV32I assembly, with optimizations on LLVM IR. SJTU CS2966 Project. ☆12 · Updated 3 years ago
- A reading list on popular MLSys topics ☆21 · Updated 10 months ago
- WaferLLM: Large Language Model Inference at Wafer Scale ☆88 · Updated last month
- GoPTX: Fine-grained GPU Kernel Fusion by PTX-level Instruction Flow Weaving ☆19 · Updated 6 months ago
- ☆224 · Updated 3 months ago
- A RISC-V simulator ☆38 · Updated 2 years ago
- ☆13 · Updated 3 years ago
- GitHub repository of the HPCA 2025 paper "UniNDP: A Unified Compilation and Simulation Tool for Near DRAM Processing Architectures" ☆19 · Updated 3 weeks ago
- ☆26 · Updated last year
- ☆24 · Updated 9 months ago
- ☆45 · Updated last year
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆53 · Updated last year
- H2-LLM: Hardware-Dataflow Co-Exploration for Heterogeneous Hybrid-Bonding-based Low-Batch LLM Inference ☆87 · Updated 9 months ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆56 · Updated last year
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆36 · Updated last year
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆52 · Updated 6 months ago
- Artifact for the ASPLOS 2025 paper "PIM is All You Need: A CXL-Enabled GPU-Free System for LLM Inference" ☆124 · Updated 9 months ago
- Open-source framework for the HPCA 2024 paper "Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators" ☆108 · Updated 9 months ago
- ☆35 · Updated 2 years ago
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing ☆108 · Updated last year
- ☆27 · Updated 6 months ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆174 · Updated last year
- The wafer-native AI accelerator simulation platform and inference engine. ☆50 · Updated last month
- HyFiSS: A Hybrid Fidelity Stall-Aware Simulator for GPGPUs ☆39 · Updated last year
- ☆35 · Updated last year
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · Updated 2 months ago