ArminAzizi98 / LaMDA
☆15 · Updated last year
Alternatives and similar repositories for LaMDA
Users interested in LaMDA are comparing it to the libraries listed below.
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆28 · Updated 3 weeks ago
- ☆19 · Updated last year
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆61 · Updated 9 months ago
- ☆29 · Updated 6 months ago
- ☆10 · Updated last year
- dParallel: Learnable Parallel Decoding for dLLMs ☆49 · Updated 2 months ago
- ☆21 · Updated 2 weeks ago
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning ☆13 · Updated last year
- [EMNLP 2024] Quantize LLM to extremely low-bit, and finetune the quantized LLMs ☆15 · Updated last year
- ☆62 · Updated 5 months ago
- The open-source materials for paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". ☆28 · Updated last year
- Official implementation for LaCo (EMNLP 2024 Findings) ☆20 · Updated last year
- ☆23 · Updated last year
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… ☆24 · Updated 5 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆52 · Updated last year
- Code and Model for NeurIPS 2024 Spotlight Paper "Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training… ☆44 · Updated last year
- ☆19 · Updated 11 months ago
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆68 · Updated 5 months ago
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆33 · Updated 6 months ago
- ☆45 · Updated 2 months ago
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆23 · Updated 2 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- Code for the EMNLP24 paper "A simple and effective L2 norm based method for KV Cache compression." ☆17 · Updated last year
- [NeurIPS 2025] Official implementation of "Reasoning Path Compression: Compressing Generation Trajectories for Efficient LLM Reasoning" ☆26 · Updated 2 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆46 · Updated last year
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆21 · Updated last year
- ☆26 · Updated 3 weeks ago
- ☆17 · Updated 4 months ago
- ☆112 · Updated 3 months ago
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆98 · Updated last year