Compression schema for gradients of activations in backward pass
☆45 · Jul 26, 2023 · Updated 2 years ago
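The repository's core idea, storing few-bit quantized gradients of activations instead of full-precision tensors during the backward pass, can be sketched as follows. This is an illustrative NumPy mock-up, not fewbit's actual API: the function names, the uniform quantizer, and the 3-bit setting are all assumptions for the example.

```python
import numpy as np

def quantize_grad(grad, bits=3):
    """Uniformly quantize a gradient tensor to 2**bits levels.

    Returns the few-bit integer codes (what would actually be stored)
    plus the offset and scale needed to reconstruct the values.
    Illustrative only -- not fewbit's actual compression scheme.
    """
    levels = 2 ** bits
    lo, hi = grad.min(), grad.max()
    scale = (hi - lo) / (levels - 1) if hi > lo else 1.0
    codes = np.round((grad - lo) / scale).astype(np.uint8)  # few-bit codes
    return codes, lo, scale

def dequantize_grad(codes, lo, scale):
    """Reconstruct an approximate gradient from the stored codes."""
    return codes.astype(np.float64) * scale + lo

rng = np.random.default_rng(0)
g = rng.standard_normal((4, 8))                  # pretend activation gradient
codes, lo, scale = quantize_grad(g, bits=3)      # store 3-bit codes, not floats
g_hat = dequantize_grad(codes, lo, scale)        # reconstruct in backward pass
err = np.abs(g - g_hat).max()                    # bounded by half a quantization step
```

The memory saving comes from keeping only `codes` (here 3 bits of information per element instead of 32 or 64); the worst-case reconstruction error of a uniform quantizer is half a step, i.e. `scale / 2`.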
Alternatives and similar repositories for fewbit
Users that are interested in fewbit are comparing it to the libraries listed below.
- Quadrature-based features for kernel approximation ☆16 · Oct 30, 2018 · Updated 7 years ago
- Faster and Lighter LoRA Implementations ☆13 · Nov 21, 2024 · Updated last year
- Gradient-free optimization method for the multidimensional arrays and discretized multivariate functions based on the tensor train (TT) f… ☆40 · Apr 29, 2025 · Updated 11 months ago
- Models and code for the ICLR 2020 workshop paper "Towards Understanding Normalization in Neural ODEs" ☆16 · Apr 27, 2020 · Updated 5 years ago
- Python bindings to llama.cpp ☆27 · Mar 22, 2023 · Updated 3 years ago
- MUSCO: MUlti-Stage COmpression of neural networks ☆72 · Feb 16, 2021 · Updated 5 years ago
- Gradient-free optimization method for multivariable functions based on the low rank tensor train (TT) format and maximal-volume principle… ☆39 · Jul 27, 2023 · Updated 2 years ago
- Telegram notifications with IPython magics ☆61 · Jul 10, 2023 · Updated 2 years ago
- First Latency-Aware Competitive LLM Agent Benchmark ☆26 · Jun 3, 2025 · Updated 10 months ago
- Code for the paper "Faster Neural Network Training with Approximate Tensor Operations" ☆10 · Oct 23, 2021 · Updated 4 years ago
- Code for "Exponential Family Estimation via Adversarial Dynamics Embedding" (NeurIPS 2019) ☆14 · Nov 26, 2019 · Updated 6 years ago
- Supplementary code for the paper "Meta-Solver for Neural Ordinary Differential Equations" https://arxiv.org/abs/2103.08561 ☆25 · Mar 30, 2021 · Updated 5 years ago
- The Hierarchical Intrinsically Motivated Agent (HIMA) is an algorithm that is intended to exhibit an adaptive goal-directed behavior usin… ☆37 · Oct 7, 2025 · Updated 6 months ago
- Skoltech 2017 NLA course ☆36 · Oct 17, 2018 · Updated 7 years ago
- ☆16 · Dec 9, 2023 · Updated 2 years ago
- Some mixture-of-experts architecture implementations ☆27 · Mar 22, 2024 · Updated 2 years ago
- Learning Accurate Decision Trees with Bandit Feedback via Quantized Gradient Descent ☆16 · Sep 8, 2022 · Updated 3 years ago
- FLOPs and other statistics COunter for Pytorch neural networks ☆23 · May 27, 2021 · Updated 4 years ago
- Super-resolution; post-training quantization; model compression ☆14 · Nov 10, 2023 · Updated 2 years ago
- BESA is a differentiable weight pruning technique for large language models ☆17 · Mar 4, 2024 · Updated 2 years ago
- Fork of Flame repo for training of some new stuff in development ☆19 · Updated this week
- This is the code for the FAT method, with links to quantized TFLite models. (CC BY-NC-ND) ☆19 · Dec 20, 2018 · Updated 7 years ago
- AFPQ code implementation ☆23 · Nov 6, 2023 · Updated 2 years ago
- Code for MSID, a Multi-Scale Intrinsic Distance for comparing generative models, studying neural networks, and more! ☆52 · May 29, 2019 · Updated 6 years ago
- ☆52 · Nov 5, 2024 · Updated last year
- ☆120 · Mar 18, 2026 · Updated 3 weeks ago
- NMF/NTF with Pytorch ☆17 · Mar 24, 2019 · Updated 7 years ago
- [COLM 2025] DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation; Zhihu: https://zhuanlan.zhihu.c… ☆30 · Mar 5, 2025 · Updated last year
- A Learnable LSH Framework for Efficient NN Training ☆34 · Jul 22, 2021 · Updated 4 years ago
- ☆10 · Aug 5, 2020 · Updated 5 years ago
- ☆11 · Feb 5, 2026 · Updated 2 months ago
- A collection of optimizers, some arcane, others well known, for Flax ☆29 · Aug 6, 2021 · Updated 4 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Oct 9, 2022 · Updated 3 years ago
- Deep Learning Course, Skoltech, 2024 ☆16 · Jun 12, 2024 · Updated last year
- A framework based on the tensor train decomposition for working with multivariate functions and multidimensional arrays ☆65 · Nov 20, 2025 · Updated 4 months ago
- ☆12 · Mar 16, 2022 · Updated 4 years ago
- ☆14 · Nov 7, 2025 · Updated 5 months ago
- Code for reproducing the results from "CrAM: A Compression-Aware Minimizer" accepted at ICLR 2023 ☆10 · Mar 1, 2023 · Updated 3 years ago