vcskaushik / LLMzip
☆56 · Updated 5 months ago
Alternatives and similar repositories for LLMzip
Users interested in LLMzip are comparing it to the libraries listed below.
- ☆49 · Updated 11 months ago
- RWKV-7: Surpassing GPT ☆92 · Updated 7 months ago
- QuIP quantization ☆54 · Updated last year
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated this week
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆81 · Updated last month
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆127 · Updated 7 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 11 months ago
- Latent Large Language Models ☆18 · Updated 10 months ago
- Simple high-throughput inference library ☆120 · Updated last month
- [NeurIPS 2024] Low-rank, memory-efficient optimizer without SVD ☆30 · Updated last week
- Self-host LLMs with LMDeploy and BentoML ☆20 · Updated this week
- A repository for research on medium-sized language models. ☆77 · Updated last year
- ☆82 · Updated 10 months ago
- ☆59 · Updated 3 months ago
- ☆45 · Updated last year
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆47 · Updated 2 months ago
- ☆104 · Updated 2 months ago
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆59 · Updated 8 months ago
- PB-LLM: Partially Binarized Large Language Models ☆152 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆55 · Updated last year
- ☆79 · Updated 8 months ago
- Implementation of Mind Evolution, Evolving Deeper LLM Thinking, from Deepmind ☆55 · Updated last month
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆49 · Updated 3 months ago
- ☆42 · Updated 9 months ago
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆40 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 9 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆32 · Updated 10 months ago
- PyTorch implementation of models from the Zamba2 series. ☆183 · Updated 5 months ago
- PyTorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆127 · Updated 10 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last year