scale-snu / mcsim_public
☆22 · Feb 26, 2023 · Updated 2 years ago
Alternatives and similar repositories for mcsim_public
Users interested in mcsim_public are comparing it to the libraries listed below.
- ☆14 · Apr 18, 2024 · Updated last year
- Cheddar: A Swift Fully Homomorphic Encryption (FHE) GPU Library · ☆46 · Jan 14, 2026 · Updated last month
- ☆10 · Jan 11, 2024 · Updated 2 years ago
- ☆11 · Aug 23, 2023 · Updated 2 years ago
- Processing-In-Memory (PIM) Simulator · ☆222 · Dec 12, 2024 · Updated last year
- DRAM error-correction code (ECC) simulator incorporating statistical error properties and DRAM design characteristics for inferring pre-c… · ☆10 · Dec 7, 2023 · Updated 2 years ago
- ☆25 · Apr 1, 2025 · Updated 10 months ago
- BEER determines an ECC code's parity-check matrix based on the uncorrectable errors it can cause. BEER targets Hamming codes that are use… · ☆19 · Oct 9, 2020 · Updated 5 years ago
- PyTorchSim is a Comprehensive, Fast, and Accurate NPU Simulation Framework · ☆90 · Updated this week
- SoftMC is an experimental FPGA-based memory controller design that can be used to develop tests for DDR3 SODIMMs using a C++ based API. T… · ☆143 · Aug 24, 2023 · Updated 2 years ago
- Homomorphic Logistic Regression on Encrypted Data · ☆51 · Apr 5, 2019 · Updated 6 years ago
- The source code for the GPGPUSim+Ramulator simulator. In this version, GPGPUSim uses Ramulator to simulate the DRAM. This simulator is used t… · ☆60 · Sep 30, 2019 · Updated 6 years ago
- Prefetching and efficient data path for memory disaggregation · ☆69 · Jul 16, 2020 · Updated 5 years ago
- Implementation of a deep ResNet model on the CKKS scheme in the Microsoft SEAL library using multiplexed parallel convolution · ☆101 · Jul 27, 2022 · Updated 3 years ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference · ☆120 · Mar 6, 2024 · Updated last year
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference · ☆186 · Jan 8, 2026 · Updated last month
- Accelergy is an energy estimation infrastructure for accelerator energy estimations · ☆156 · May 26, 2025 · Updated 8 months ago
- TAPA compiles task-parallel HLS programs into high-performance FPGA accelerators. UCLA-maintained. · ☆181 · Aug 16, 2025 · Updated 6 months ago
- Graph500 reference implementations · ☆181 · Mar 25, 2022 · Updated 3 years ago
- Intel Homomorphic Encryption Acceleration Library accelerates modular arithmetic operations used in homomorphic encryption by leveraging … · ☆254 · Jul 17, 2025 · Updated 6 months ago
- ☆317 · Nov 7, 2025 · Updated 3 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆336 · Jul 2, 2024 · Updated last year
- Repository to host and maintain SCALE-Sim code · ☆412 · Feb 2, 2026 · Updated last week
- Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance · ☆407 · Jan 2, 2025 · Updated last year
- This is the top-level repository for the Accel-Sim framework · ☆562 · Feb 7, 2026 · Updated last week
- An Agile RISC-V SoC Design Framework with in-order cores, out-of-order cores, accelerators, and more · ☆2,138 · Updated this week
- Model Compression Toolbox for Large Language Models and Diffusion Models · ☆753 · Aug 14, 2025 · Updated 6 months ago
- BaseJump STL: A Standard Template Library for SystemVerilog · ☆645 · Jan 19, 2026 · Updated 3 weeks ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆812 · Mar 6, 2025 · Updated 11 months ago
- Berkeley's Spatial Array Generator · ☆1,220 · Updated this week
- GPGPU-Sim provides a detailed simulation model of contemporary NVIDIA GPUs running CUDA and/or OpenCL workloads. It includes support for… · ☆1,574 · Feb 15, 2025 · Updated last year
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models · ☆1,607 · Jul 12, 2024 · Updated last year
- The official repository for the gem5 computer-system architecture simulator · ☆2,460 · Feb 7, 2026 · Updated last week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads · ☆2,705 · Jun 25, 2024 · Updated last year
- GNU toolchain for RISC-V, including GCC · ☆4,360 · Jan 23, 2026 · Updated 3 weeks ago
- 📚 A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. 🎉 · ☆4,990 · Jan 18, 2026 · Updated 3 weeks ago
- Universal LLM Deployment Engine with ML Compilation · ☆22,039 · Updated this week
- Transformer related optimization, including BERT, GPT · ☆6,392 · Mar 27, 2024 · Updated last year
- Inference Llama 2 in one file of pure C · ☆19,162 · Aug 6, 2024 · Updated last year