kernelmachine / balanced-kmeans
☆21 · Updated last year
Alternatives and similar repositories for balanced-kmeans
Users interested in balanced-kmeans are comparing it to the libraries listed below.
- ☆143 · Updated last year
- Kinetics: Rethinking Test-Time Scaling Laws ☆86 · Updated 6 months ago
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- [ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆103 · Updated 7 months ago
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆65 · Updated last year
- Code for the preprint "Cache Me If You Can: How Many KVs Do You Need for Effective Long-Context LMs?" ☆47 · Updated 6 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Updated 10 months ago
- ☆64 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆109 · Updated 3 months ago
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆24 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆58 · Updated last year
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆69 · Updated last year
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆20 · Updated last year
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆71 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆147 · Updated last year
- ☆77 · Updated last year
- ☆51 · Updated last year
- ☆63 · Updated 7 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆44 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆79 · Updated last year
- ☆108 · Updated last year
- ☆35 · Updated last year
- ☆19 · Updated last year
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) ☆39 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 9 months ago
- ☆112 · Updated last year
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆52 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆47 · Updated last year
- Stick-breaking attention ☆62 · Updated 7 months ago
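
For readers landing here from the list above, a one-screen reminder of what balanced k-means itself does may help. The sketch below is a minimal NumPy illustration of the greedy capacity-constrained assignment idea: each cluster accepts at most ceil(n/k) points, and the most "decided" points (largest gap between their best and second-best center) are assigned first. The function name, the greedy ordering, and all other details are assumptions for illustration only; this is not the kernelmachine/balanced-kmeans implementation, which may differ.

```python
import numpy as np

def balanced_kmeans(X, k, n_iter=20, seed=0):
    """Illustrative sketch of capacity-constrained k-means (assumes k >= 2).

    Each cluster holds at most ceil(n / k) points; assignment is greedy,
    not the repo's actual algorithm.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    cap = -(-n // k)  # ceil(n / k): per-cluster capacity
    centers = X[rng.choice(n, size=k, replace=False)].astype(float)
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        # Squared distance of every point to every center, shape (n, k).
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        # Order points so the most "decided" ones (largest margin between
        # nearest and second-nearest center) claim their slots first.
        two_best = np.partition(d, 1, axis=1)[:, :2]
        order = np.argsort(two_best[:, 0] - two_best[:, 1])
        counts = np.zeros(k, dtype=int)
        for i in order:
            for c in np.argsort(d[i]):  # nearest cluster with room
                if counts[c] < cap:
                    labels[i] = c
                    counts[c] += 1
                    break
        # Standard k-means center update on the balanced assignment.
        for c in range(k):
            members = X[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels, centers

# Usage: with n = 100 and k = 4, every cluster ends up with exactly 25 points.
X = np.random.default_rng(0).normal(size=(100, 8))
labels, centers = balanced_kmeans(X, k=4)
print(np.bincount(labels, minlength=4))  # e.g. [25 25 25 25]
```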