rejunity / tiny-asic-1_58bit-matrix-mul
Tiny ASIC implementation of the matrix multiplication unit from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits"
☆172 · Updated last year
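For context, "1.58-bit" weights are ternary: each weight is one of {-1, 0, +1}, so a matrix-vector product needs no multipliers, only adds, subtracts, and skips. A minimal Python sketch of that idea (illustrative only, not the repository's RTL):

```python
# Illustrative sketch of a 1.58-bit (ternary) matrix-vector multiply.
# Weights are restricted to {-1, 0, +1}, so each "multiply" collapses to
# an add, a subtract, or a no-op -- the property a ternary matmul ASIC
# exploits to avoid multiplier circuits entirely.

def ternary_matvec(weights, x):
    """Multiply a ternary weight matrix (rows of -1/0/+1) by vector x."""
    out = []
    for row in weights:
        acc = 0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi      # +1 weight: accumulate the activation
            elif w == -1:
                acc -= xi      # -1 weight: subtract the activation
            # w == 0: skip, no hardware operation needed
        out.append(acc)
    return out

W = [[1, 0, -1],
     [-1, 1, 1]]
x = [3, 5, 2]
print(ternary_matvec(W, x))  # [1, 4]
```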
Alternatives and similar repositories for tiny-asic-1_58bit-matrix-mul
Users interested in tiny-asic-1_58bit-matrix-mul are comparing it to the repositories listed below.
- ☆117 · Updated last year
- Machine-Learning Accelerator System Exploration Tools ☆186 · Updated this week
- The Riallto Open Source Project from AMD ☆83 · Updated 8 months ago
- An AI accelerator implementation with Xilinx FPGA ☆79 · Updated 11 months ago
- A high-efficiency system-on-chip for floating-point compute workloads ☆44 · Updated 11 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆111 · Updated last year
- Ocelot: The Berkeley Out-of-Order Machine with V-EXT support ☆206 · Updated 3 weeks ago
- DNN Compiler for Heterogeneous SoCs ☆59 · Updated 2 weeks ago
- A survey on hardware-accelerated LLMs ☆61 · Updated 11 months ago
- ☆36 · Updated 2 years ago
- Run 64-bit Linux on LiteX + RocketChip ☆208 · Updated 2 months ago
- A new LLM solution for RTL code generation, achieving state-of-the-art performance among non-commercial solutions and outperforming GPT-3.5 ☆243 · Updated 11 months ago
- Universal Memory Interface (UMI) ☆156 · Updated 2 weeks ago
- Open source machine learning accelerators ☆394 · Updated last year
- Verilog evaluation benchmark for large language models ☆361 · Updated 5 months ago
- Verilog package manager written in Rust ☆143 · Updated last year
- Open-source software/hardware platform for building edge AI solutions deployed on FPGA or custom ASIC hardware ☆282 · Updated this week
- A mini 2x2 systolic array and PE demo ☆66 · Updated 2 weeks ago
- Research and materials on hardware implementation of the Transformer model ☆295 · Updated 10 months ago
- Attention in SRAM on Tenstorrent Grayskull ☆40 · Updated last year
- Fully open-source spiking neural network accelerator ☆163 · Updated 2 years ago
- A minimal Tensor Processing Unit (TPU) inspired by Google's TPUv1 ☆192 · Updated last year
- ☆306 · Updated this week
- Spatz is a compact RISC-V-based vector processor meant for high-performance, small computing clusters ☆134 · Updated last week
- Self-checking RISC-V directed tests ☆118 · Updated 7 months ago
- Inference RWKV v7 in pure C ☆43 · Updated 2 months ago
- ☆233 · Updated last year
- An open workflow to build custom SoCs and run deep models at the edge ☆101 · Updated 3 weeks ago
- Floating-point modules for Chisel ☆32 · Updated 11 years ago
- FREE TPU V3plus for FPGA is the free version of a commercial AI processor (EEP-TPU) for deep learning edge inference ☆168 · Updated 2 years ago
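Several entries above (the 2x2 systolic array demo, the minimal TPU inspired by Google's TPUv1) center on systolic matrix multiplication: each processing element holds a stationary weight, multiplies the activation streaming past it, and forwards the partial sum to its neighbor. A hedged functional sketch of a 2x2 weight-stationary array (cycle-accurate pipelining omitted, names are illustrative):

```python
# Functional sketch of a 2x2 weight-stationary systolic array computing
# C = A @ B. PE at array position (k, j) holds weight B[k][j]; the
# activation A[i][k] streams in from the left and partial sums flow
# downward through the column. Real arrays pipeline these operations
# across clock cycles; this model just reproduces the dataflow's math.

def systolic_matmul_2x2(A, B):
    n = 2
    C = [[0] * n for _ in range(n)]
    for i in range(n):              # stream each row of A through the array
        for j in range(n):          # array column j
            partial = 0
            for k in range(n):      # array row k: PE (k, j) holds B[k][j]
                # PE multiplies the incoming activation by its stored
                # weight and adds the partial sum arriving from above.
                partial += A[i][k] * B[k][j]
            C[i][j] = partial       # bottom of the column emits C[i][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul_2x2(A, B))  # [[19, 22], [43, 50]]
```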