BorealisAI / neuzip
Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks"; it contains the code for the experiments in the paper.
☆53 · Updated 3 months ago
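For context, the paper's central observation is that the exponent bits of floating-point neural-network weights carry little entropy, so the exponent plane can be entropy-coded losslessly and decompressed on the fly. Below is a minimal illustrative sketch of that idea, not the repository's API: `zlib` stands in for a real entropy coder, and every function name here is hypothetical.

```python
# Sketch of NeuZip-style lossless weight compression: split float16 weights
# into sign/exponent/mantissa bit planes and entropy-code only the exponents.
# Illustrative only; zlib is a stand-in for the paper's entropy coder.
import zlib
import numpy as np

def split_fp16(weights: np.ndarray):
    """Split float16 weights into sign (1 bit), exponent (5 bits),
    and mantissa (10 bits) bit planes."""
    bits = weights.view(np.uint16)
    sign = (bits >> 15).astype(np.uint8)
    exponent = ((bits >> 10) & 0x1F).astype(np.uint8)
    mantissa = bits & 0x3FF  # low 10 bits, kept as uint16
    return sign, exponent, mantissa

def compress_exponents(exponent: np.ndarray) -> bytes:
    # Weights cluster near zero, so a handful of exponent values dominate
    # and the plane compresses to well under its raw one byte per weight.
    return zlib.compress(exponent.tobytes())

def decompress_exponents(blob: bytes, shape) -> np.ndarray:
    return np.frombuffer(zlib.decompress(blob), dtype=np.uint8).reshape(shape)

def reassemble_fp16(sign, exponent, mantissa) -> np.ndarray:
    bits = (
        (sign.astype(np.uint16) << 15)
        | (exponent.astype(np.uint16) << 10)
        | mantissa
    )
    return bits.view(np.float16)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = (rng.standard_normal(1 << 20) * 0.02).astype(np.float16)  # init-scale weights
    s, e, m = split_fp16(w)
    blob = compress_exponents(e)
    print(f"exponent plane: {e.nbytes} B -> {len(blob)} B")
    w2 = reassemble_fp16(s, decompress_exponents(blob, e.shape), m)
    assert np.array_equal(w.view(np.uint16), w2.view(np.uint16))  # lossless round-trip
```

In a training loop, the per-layer weights would be kept compressed and decompressed layer by layer during the forward and backward passes, trading compute for peak memory.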
Alternatives and similar repositories for neuzip:
Users interested in neuzip are comparing it to the libraries listed below.
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆47 · Updated 7 months ago
- Training-free post-training sub-quadratic-complexity attention, implemented with OpenAI Triton. ☆106 · Updated this week
- A repository for research on medium-sized language models. ☆76 · Updated 8 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆149 · Updated 2 months ago
- A single repo with all scripts and utils to train/fine-tune the Mamba model with or without FIM ☆51 · Updated 10 months ago
- PyTorch implementation of models from the Zamba2 series. ☆176 · Updated 3 weeks ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆96 · Updated 4 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆38 · Updated last year
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆78 · Updated 2 months ago
- Repo hosting code and materials on speeding up LLM inference using token merging. ☆35 · Updated 9 months ago
- RWKV-7: Surpassing GPT ☆79 · Updated 3 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆24 · Updated 5 months ago
- QuIP quantization ☆50 · Updated 11 months ago
- Train, tune, and run inference with the Bamba model ☆84 · Updated last month
- Repository for sparse fine-tuning of LLMs via a modified version of MosaicML's llmfoundry ☆40 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆116 · Updated 2 months ago
- A specialized RWKV-7 model for Othello (a.k.a. Reversi) that predicts legal moves, evaluates positions, and performs in-context search. It… ☆38 · Updated 3 weeks ago
- Official implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆33 · Updated 2 weeks ago
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆114 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆39 · Updated 8 months ago
- Here we will test various linear attention designs. ☆58 · Updated 9 months ago