BorealisAI / neuzip
Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". It contains the code for the experiments in the paper.
☆60 · Updated last year
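The paper's key observation is that the exponent bits of floating-point network weights carry far fewer than their stored 8 bits of information, so NeuZip entropy-codes them losslessly and decompresses layers on the fly during training (a lossy variant additionally reduces mantissa precision for inference). The snippet below is a minimal sketch of that observation in plain PyTorch; it is not code from this repository, and the `exponent_entropy` helper is a hypothetical name introduced for the example.

```python
# Illustrative sketch (not this repository's code): measure how few bits the
# bfloat16 exponent field of a weight tensor actually carries.
import torch

def exponent_entropy(weights: torch.Tensor) -> float:
    """Empirical entropy (bits per value) of the bfloat16 exponent field."""
    # Reinterpret bf16 bit patterns as integers: 1 sign, 8 exponent, 7 mantissa bits.
    bits = weights.to(torch.bfloat16).view(torch.int16).to(torch.int32) & 0xFFFF
    exponents = (bits >> 7) & 0xFF  # isolate the 8-bit exponent field
    counts = torch.bincount(exponents.flatten().to(torch.int64), minlength=256)
    probs = counts[counts > 0].float() / exponents.numel()
    return float(-(probs * probs.log2()).sum())

w = torch.nn.Linear(1024, 1024).weight.detach()
print(f"exponent entropy: {exponent_entropy(w):.2f} bits (vs. 8 bits stored)")
```

Per the paper, exponents of trained weights exhibit entropy of only a few bits, which is what makes lossless compression effective without changing model outputs.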
Alternatives and similar repositories for neuzip
Users interested in neuzip are comparing it to the libraries listed below.
- Work in progress. ☆79 · Updated 2 months ago
- ☆67 · Updated 10 months ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆148 · Updated 3 months ago
- RWKV-7: Surpassing GPT ☆104 · Updated last year
- QuIP quantization ☆61 · Updated last year
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆51 · Updated 5 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆61 · Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated last year
- ☆119 · Updated last month
- ☆71 · Updated 7 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated last week
- Esoteric Language Models ☆111 · Updated this week
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 9 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Implementation of Mind Evolution, Evolving Deeper LLM Thinking, from DeepMind ☆59 · Updated 8 months ago
- ☆82 · Updated last year
- Official implementation of the paper: "ZClip: Adaptive Spike Mitigation for LLM Pre-Training". ☆144 · Updated 2 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆37 · Updated 4 months ago
- A collection of tricks and tools to speed up transformer models ☆194 · Updated last month
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆59 · Updated 10 months ago
- A repository for research on medium sized language models. ☆77 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58bit Language models ☆112 · Updated 8 months ago
- Implementation of the proposed DeepCrossAttention by Heddes et al. at Google Research, in PyTorch ☆96 · Updated 11 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆35 · Updated 11 months ago
- ☆64 · Updated 8 months ago
- ☆52 · Updated last year
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. ☆84 · Updated 2 months ago
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆52 · Updated 3 months ago
- An extension to the GaLore paper, to perform Natural Gradient Descent in a low-rank subspace ☆18 · Updated last year
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆174 · Updated 3 months ago