☆19 · Dec 5, 2024 · Updated last year
Alternatives and similar repositories for LOZO
Users that are interested in LOZO are comparing it to the libraries listed below.
- ViTALiTy (HPCA'23) Code Repository ☆23 · Mar 13, 2023 · Updated 3 years ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Jun 12, 2024 · Updated last year
- Fine-tuning Quantized Neural Networks with Zeroth-order Optimization ☆18 · Sep 17, 2025 · Updated 6 months ago
- PyTorch code for the paper "QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models" ☆25 · Sep 27, 2023 · Updated 2 years ago
- Implementation of Microscaling data formats in SystemVerilog ☆32 · Jul 6, 2025 · Updated 9 months ago
- An implementation of run-length encoding for PyTorch tensors using CUDA ☆14 · Apr 7, 2021 · Updated 5 years ago
- [ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆126 · Jul 6, 2025 · Updated 9 months ago
- [EMNLP 24] Source code for the paper "AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for Memory-Efficient Large Language Models Fine-Tuning" ☆13 · Dec 15, 2024 · Updated last year
- ☆35 · Dec 22, 2025 · Updated 3 months ago
- [ICML 2024] Sparse Model Inversion: Efficient Inversion of Vision Transformers with Less Hallucination ☆14 · Apr 29, 2025 · Updated 11 months ago
- ☆16 · Mar 8, 2024 · Updated 2 years ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆20 · Feb 16, 2024 · Updated 2 years ago
- ☆11 · Apr 5, 2023 · Updated 3 years ago
- Official implementation of the ICML'24 paper "LQER: Low-Rank Quantization Error Reconstruction for LLMs" ☆19 · Jul 11, 2024 · Updated last year
- SMART introduces a novel test-time framework where Small Language Models (SLMs) reason step-by-step and Large Language Models (LLMs) pro… ☆11 · Jul 9, 2025 · Updated 9 months ago
- [CVPR 2025] QuartDepth ☆17 · Mar 24, 2025 · Updated last year
- ☆15 · Apr 6, 2026 · Updated last week
- ☆70 · Mar 2, 2026 · Updated last month
- LLMA = LLM + arithmetic coder, which uses an LLM to compress text data aggressively, achieving extremely high compression ratios. ☆22 · Nov 24, 2024 · Updated last year
- Source code of our TNNLS paper "Boosting Convolutional Neural Networks with Middle Spectrum Grouped Convolution" ☆12 · Apr 14, 2023 · Updated 3 years ago
- Provides the code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators" by Luk… ☆19 · Oct 6, 2019 · Updated 6 years ago
- ☆16 · Dec 9, 2023 · Updated 2 years ago
- ☆14 · Jul 14, 2025 · Updated 9 months ago
- [ICML 2025] Official PyTorch implementation of "NegMerge: Sign-Consensual Weight Merging for Machine Unlearning" ☆14 · Nov 25, 2025 · Updated 4 months ago
- ☆15 · Apr 11, 2024 · Updated 2 years ago
- The official repository of Quamba1 [ICLR 2025] & Quamba2 [ICML 2025] ☆67 · Jun 19, 2025 · Updated 9 months ago
- ☆10 · Apr 24, 2024 · Updated last year
- ☆28 · Mar 10, 2026 · Updated last month
- ☆17 · Mar 10, 2025 · Updated last year
- ☆13 · Jul 25, 2024 · Updated last year
- The implementation of "FREE-Merging: Fourier Transform for Model Merging with Lightweight Experts" (ICCV 2025) ☆14 · Jun 26, 2025 · Updated 9 months ago
- The official code for "Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation" [MM2… ☆14 · Dec 7, 2024 · Updated last year
- ☆15 · Aug 19, 2024 · Updated last year
- ☆12 · Jul 30, 2025 · Updated 8 months ago
- BESA is a differentiable weight-pruning technique for large language models. ☆17 · Mar 4, 2024 · Updated 2 years ago
- Fork of the Flame repo for training some new work in development ☆19 · Updated this week
- A repository for logarithmic functional units ☆18 · Jan 12, 2026 · Updated 3 months ago
- [ICCAD 2025] Squant ☆15 · Jul 3, 2025 · Updated 9 months ago
- RepGhostNetV2: When RepGhost meets MobileNetV4 ☆16 · May 29, 2024 · Updated last year
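Several of the repos above center on compression primitives. As a minimal illustration of the run-length encoding idea behind the PyTorch/CUDA entry in the list, here is a hypothetical plain-Python sketch (it is not that repo's implementation, which runs on GPU tensors):

```python
from typing import List, Tuple

def rle_encode(values: List[int]) -> List[Tuple[int, int]]:
    """Collapse a flat sequence into (value, run_length) pairs."""
    runs: List[Tuple[int, int]] = []
    for v in values:
        if runs and runs[-1][0] == v:
            # Extend the current run when the value repeats.
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            # Start a new run for a fresh value.
            runs.append((v, 1))
    return runs

def rle_decode(runs: List[Tuple[int, int]]) -> List[int]:
    """Expand (value, run_length) pairs back to the flat sequence."""
    out: List[int] = []
    for v, n in runs:
        out.extend([v] * n)
    return out

# Sparse tensors (many repeated zeros) are the case where RLE pays off.
data = [0, 0, 0, 5, 5, 1, 0, 0]
encoded = rle_encode(data)      # [(0, 3), (5, 2), (1, 1), (0, 2)]
assert rle_decode(encoded) == data
```

A GPU version would parallelize the run-boundary detection (e.g., comparing each element with its neighbor) rather than scanning sequentially as above.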