rejunity / tiny-asic-1_58bit-matrix-mul
Tiny ASIC implementation of the matrix multiplication unit from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits"
☆127 · Updated 11 months ago
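The core trick behind such a unit is that 1.58-bit (ternary, log₂ 3 ≈ 1.58 bits per weight) weights take only the values -1, 0, or +1, so a matrix-vector product needs no multipliers: each term is an add, a subtract, or a skip. Below is a minimal Python sketch of that idea; the function name and shapes are illustrative, not the repository's actual RTL or API.

```python
import numpy as np

def ternary_matvec(w, x):
    """Multiply a ternary weight matrix (entries in {-1, 0, +1}) by an
    activation vector using only adds/subtracts -- the property a
    1.58-bit matmul unit exploits to avoid hardware multipliers.
    (Illustrative sketch, not the repository's actual design.)"""
    y = np.zeros(w.shape[0], dtype=x.dtype)
    for i in range(w.shape[0]):
        for j in range(w.shape[1]):
            if w[i, j] == 1:
                y[i] += x[j]   # +1 weight: accumulate the activation
            elif w[i, j] == -1:
                y[i] -= x[j]   # -1 weight: subtract the activation
            # 0 weight: contributes nothing, no work needed
    return y

# Example: 2x3 ternary weights times a 3-vector of integer activations
w = np.array([[1, 0, -1], [-1, 1, 1]], dtype=np.int8)
x = np.array([3, 5, 2], dtype=np.int32)
print(ternary_matvec(w, x))  # -> [1 4]
```

In silicon this turns every multiply-accumulate into a conditional add/subtract, which is what makes the matrix multiplication unit small enough for a tiny ASIC.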
Alternatives and similar repositories for tiny-asic-1_58bit-matrix-mul:
Users interested in tiny-asic-1_58bit-matrix-mul are comparing it to the repositories listed below
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆104 · Updated 5 months ago
- ☆87 · Updated last year
- ☆112 · Updated this week
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 5 months ago
- 1.58-bit LLaMa model ☆82 · Updated 11 months ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆211 · Updated 2 months ago
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆259 · Updated 5 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆272 · Updated last year
- The Riallto Open Source Project from AMD ☆75 · Updated 4 months ago
- Train your own small BitNet model ☆65 · Updated 5 months ago
- PB-LLM: Partially Binarized Large Language Models ☆152 · Updated last year
- QuIP quantization ☆52 · Updated last year
- GroqFlow provides an automated tool flow for compiling machine learning and linear algebra workloads into Groq programs and executing tho… ☆108 · Updated 3 weeks ago
- Experimental BitNet Implementation ☆61 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" adapted for Llama models ☆36 · Updated last year
- Run 64-bit Linux on LiteX + RocketChip ☆194 · Updated 7 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆196 · Updated 8 months ago
- Machine-Learning Accelerator System Exploration Tools ☆153 · Updated this week
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆125 · Updated 3 months ago
- An LLM solution for RTL code generation that achieves state-of-the-art performance among non-commercial solutions and outperforms GPT-3.5 ☆173 · Updated last month
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆70 · Updated last month
- ☆79 · Updated 4 months ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation ☆71 · Updated last year
- An AI accelerator implemented on a Xilinx FPGA ☆25 · Updated last month
- llama.cpp fork with additional SOTA quants and improved performance ☆222 · Updated this week
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆138 · Updated last month
- Inference of Mamba models in pure C ☆186 · Updated last year
- BitNet a4.8 implementation in a single PyTorch file ☆13 · Updated 2 months ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆234 · Updated last month
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆71 · Updated 6 months ago