UtkarshSaxena1 / EigenAttn
☆20 · Updated Oct 13, 2024
Alternatives and similar repositories for EigenAttn
Users interested in EigenAttn are comparing it with the repositories listed below.
- [TMLR] CoDeC: Communication-Efficient Decentralized Continual Learning — ☆12 · Updated Apr 17, 2024
- ☆33 · Updated Mar 28, 2025
- [Deep Unlearning-PyTorch] Class forgetting as in the paper "Deep Unlearning: Fast and Efficient Training-free Approach to Controlled Forgetti… — ☆15 · Updated Jul 26, 2024
- Bash scripts for launching, orchestrating, managing, and monitoring jobs on Purdue's RCAC clusters — ☆22 · Updated Dec 22, 2025
- ☆15 · Updated Nov 7, 2024
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning — ☆24 · Updated Oct 7, 2025
- [AAAI] Official code repository for "Continual Learning with Scaled Gradient Projection" — ☆16 · Updated Jun 28, 2023
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized models — ☆15 · Updated Jul 18, 2024
- The official implementation of "Well Begun is Half Done: Low-resource Preference Alignment by Weak-to-Strong Decoding" — ☆23 · Updated Jun 26, 2025
- [NeurIPS 2024] BiDM: Pushing the Limit of Quantization for Diffusion Models — ☆22 · Updated Nov 20, 2024
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models — ☆21 · Updated May 28, 2024
- [ICML 2024] Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation — ☆22 · Updated Jun 26, 2024
- Official implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration — ☆29 · Updated Nov 22, 2025
- ☆47 · Updated Oct 2, 2025
- ☆35 · Updated Feb 10, 2025
- [ICML 2025] Source code of the paper "A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models" — ☆35 · Updated Apr 2, 2025
- The first spoken long-text dataset derived from live streams, designed to reflect the redundancy-rich and conversational nature of real-w… — ☆13 · Updated Jun 28, 2025
- A Text2SQL benchmark for evaluating Large Language Models — ☆41 · Updated this week
- ☆54 · Updated Jul 7, 2025
- Source code of the paper "KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing" — ☆31 · Updated Oct 24, 2024
- Implementation of DOTIE: Detecting Objects through Temporal Isolation of Events using a Spiking Architecture — ☆33 · Updated Sep 27, 2023
- [NeurIPS ENLSP Workshop '24] CSKV: Training-Efficient Channel Shrinking for KV Cache in Long-Context Scenarios — ☆16 · Updated Oct 18, 2024
- ☆18 · Updated Jun 10, 2025
- Repository for the project "Fine-tuning Large Language Models with Sequential Instructions"; code base derived from open-instruct and LA… — ☆30 · Updated Nov 24, 2024
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) — ☆35 · Updated Mar 7, 2025
- Analyzing and Reducing Catastrophic Forgetting in Parameter-Efficient Tuning — ☆36 · Updated Nov 17, 2024
- xKV: Cross-Layer SVD for KV-Cache Compression — ☆43 · Updated Nov 30, 2025
- A Framework for Evaluating AI Agent Safety in Realistic Environments — ☆30 · Updated Oct 2, 2025
- [ICML 2024] Official repository for the paper "Transformers Get Stable: An End-to-End Signal Propagation Theory for Language Models" — ☆10 · Updated Jul 19, 2024
- Symphony: a decentralized multi-agent framework that enables intelligent agents to collaborate seamlessly across heterogeneous edge devi… — ☆30 · Updated Oct 30, 2025
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization — ☆37 · Updated Sep 24, 2024
- The official implementation of the paper "DaMo: Data Mixing Optimizer in Fine-tuning Multimodal LLMs for Mobile Phone Agents" — ☆28 · Updated Oct 23, 2025
- VideoNSA: Native Sparse Attention Scales Video Understanding — ☆78 · Updated Nov 16, 2025
- ☆11 · Updated Jun 22, 2025
- [CoLM '25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" — ☆155 · Updated Jan 14, 2026
- [NeurIPS 2025] Think Silently, Think Fast: Dynamic Latent Compression of LLM Reasoning Chains — ☆76 · Updated Jul 29, 2025
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models — ☆38 · Updated Feb 27, 2024
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models — ☆42 · Updated Oct 28, 2025
- AdaMoLE: Adaptive Mixture of LoRA Experts — ☆38 · Updated Oct 11, 2024