rejunity / tiny-asic-1_58bit-matrix-mul
Tiny ASIC implementation of the matrix multiplication unit from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits"
☆169 · Updated last year
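The unit targets matrix multiplication with ternary weights, the core trick of the 1.58-bit paper: every weight is -1, 0, or +1, so each product collapses to an add, a subtract, or a skip, and no hardware multipliers are needed. The repository itself is hardware RTL; the NumPy sketch below is only an illustration of that idea (function name and shapes are my own, not code from the project).

```python
import numpy as np

def ternary_matmul(x, w):
    """Multiplier-free matmul for ternary (1.58-bit) weights.

    x: (batch, in_features) activations
    w: (in_features, out_features) with entries in {-1, 0, +1}
    Each output column is just a signed sum of selected activations.
    """
    assert set(np.unique(w)).issubset({-1, 0, 1})
    out = np.zeros((x.shape[0], w.shape[1]), dtype=x.dtype)
    for j in range(w.shape[1]):
        # +1 weights add the activation, -1 weights subtract it, 0 weights skip it
        out[:, j] = x[:, w[:, j] == 1].sum(axis=1) - x[:, w[:, j] == -1].sum(axis=1)
    return out

# Quick check against an ordinary matmul
x = np.random.randn(2, 8).astype(np.float32)
w = np.random.choice([-1, 0, 1], size=(8, 4)).astype(np.int8)
assert np.allclose(ternary_matmul(x, w), x @ w)
```

In silicon, the same observation presumably turns each dot product into an adder tree with per-weight sign/zero gating, which is what lets a design like this stay tiny.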
Alternatives and similar repositories for tiny-asic-1_58bit-matrix-mul
Users interested in tiny-asic-1_58bit-matrix-mul are comparing it to the repositories listed below.
- ☆111 · Updated last year
- Machine-Learning Accelerator System Exploration Tools ☆183 · Updated 3 weeks ago
- An AI accelerator implementation with Xilinx FPGA ☆71 · Updated 9 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆111 · Updated last year
- The Riallto Open Source Project from AMD ☆84 · Updated 7 months ago
- A new LLM solution for RTL code generation, achieving state-of-the-art performance among non-commercial solutions and outperforming GPT-3.5 ☆238 · Updated 9 months ago
- ☆34 · Updated last year
- A survey on Hardware Accelerated LLMs ☆59 · Updated 10 months ago
- DNN Compiler for Heterogeneous SoCs ☆53 · Updated last week
- Torch2Chip (MLSys 2024) ☆54 · Updated 7 months ago
- First Open-Source Industry-Specific Model for Semiconductors ☆378 · Updated 7 months ago
- A high-efficiency system-on-chip for floating-point compute workloads ☆43 · Updated 10 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated last year
- Research and Materials on Hardware Implementation of Transformer Models ☆287 · Updated 8 months ago
- Verilog evaluation benchmark for large language models ☆342 · Updated 4 months ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆227 · Updated 10 months ago
- Samples of good AI-generated CUDA kernels ☆91 · Updated 5 months ago
- ☆52 · Updated 2 months ago
- Fully open-source spiking neural network accelerator ☆162 · Updated 2 years ago
- Experimental BitNet Implementation ☆73 · Updated 4 months ago
- Attention in SRAM on Tenstorrent Grayskull ☆38 · Updated last year
- Inference of RWKV v7 in pure C ☆42 · Updated last month
- An Open Workflow to Build Custom SoCs and Run Deep Models at the Edge ☆97 · Updated 3 weeks ago
- ☆57 · Updated 7 months ago
- ☆154 · Updated 5 months ago
- QuIP quantization ☆61 · Updated last year
- Run 64-bit Linux on LiteX + RocketChip ☆204 · Updated last month
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆114 · Updated last year
- A minimal Tensor Processing Unit (TPU) inspired by Google's TPUv1 ☆188 · Updated last year
- ☆38 · Updated 8 months ago