FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation
☆51 · Aug 24, 2025 · Updated 7 months ago
Alternatives and similar repositories for FBI-LLM
Users that are interested in FBI-LLM are comparing it to the libraries listed below.
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆39 · Sep 12, 2024 · Updated last year
- Information Bottleneck in DNN with PyTorch ☆15 · Jul 6, 2023 · Updated 2 years ago
- ☆120 · Mar 18, 2026 · Updated 3 weeks ago
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆30 · Apr 7, 2026 · Updated last week
- Official PyTorch Implementation of the Paper "DarwinLM: Evolutionary Structured Pruning of Large Language Models" ☆20 · Feb 21, 2025 · Updated last year
- ☆12 · Nov 17, 2023 · Updated 2 years ago
- ☆16 · Dec 9, 2023 · Updated 2 years ago
- CASS: Nvidia to AMD Transpilation with Data, Models, and Benchmark ☆34 · Apr 9, 2026 · Updated last week
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆336 · Updated this week
- LLM Inference with Microscaling Format ☆34 · Nov 12, 2024 · Updated last year
- ☆20 · Mar 6, 2022 · Updated 4 years ago
- Official PyTorch implementation of CD-MOE ☆12 · Mar 18, 2026 · Updated 3 weeks ago
- ☆35 · Dec 22, 2025 · Updated 3 months ago
- Structured Binary Neural Networks for Image Recognition ☆18 · Nov 18, 2021 · Updated 4 years ago
- The predecessor of CiteLab. ☆18 · Feb 3, 2026 · Updated 2 months ago
- ☆53 · Jul 18, 2024 · Updated last year
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized LLMs ☆15 · Jul 18, 2024 · Updated last year
- [NIPS 2025] Mixing Expert Knowledge: Bring Human Thoughts Back to The Game of Go. Our model is originally named InternThinker-Go, and cal… ☆25 · Jan 26, 2026 · Updated 2 months ago
- [ICML 2023] This project is the official implementation of our accepted ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binar… ☆56 · Mar 4, 2024 · Updated 2 years ago
- [CVPRW 21] "BNN - BN = ? Training Binary Neural Networks without Batch Normalization", Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu… ☆57 · Dec 30, 2021 · Updated 4 years ago
- ☆14 · Jun 4, 2024 · Updated last year
- The official implementation of BiViT: Extremely Compressed Binary Vision Transformers ☆16 · Jun 18, 2023 · Updated 2 years ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆229 · Jan 11, 2025 · Updated last year
- (ICLR 2025) BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models ☆26 · Oct 4, 2024 · Updated last year
- ☆21 · Mar 7, 2024 · Updated 2 years ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆33 · May 9, 2024 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆123 · Jul 4, 2025 · Updated 9 months ago
- PB-LLM: Partially Binarized Large Language Models ☆155 · Nov 20, 2023 · Updated 2 years ago
- An implementation of LazyLLM token pruning for the LLaMA 2 model family ☆13 · Jan 6, 2025 · Updated last year
- ☆49 · May 20, 2025 · Updated 10 months ago
- ☆12 · Aug 22, 2023 · Updated 2 years ago
- ViTALiTy (HPCA'23) Code Repository ☆23 · Mar 13, 2023 · Updated 3 years ago
- A pragmatic approach to parsing import profiles for CIs ☆12 · Jul 1, 2024 · Updated last year
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆31 · Jul 4, 2024 · Updated last year
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆37 · Sep 30, 2025 · Updated 6 months ago
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆68 · Mar 27, 2025 · Updated last year
- GoldenEye is a functional simulator with fault injection capabilities for common and emerging numerical formats, implemented for the PyTo… ☆27 · Oct 22, 2024 · Updated last year
- World's Smallest Vision-Language Model ☆33 · Apr 7, 2024 · Updated 2 years ago
- ☆25 · Oct 31, 2024 · Updated last year