ArminAzizi98 / LaMDA
☆15 · Updated last year
Alternatives and similar repositories for LaMDA
Users interested in LaMDA are comparing it to the repositories listed below.
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆28 · Updated 2 months ago
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized LLMs ☆15 · Updated last year
- ☆29 · Updated 8 months ago
- ☆23 · Updated last year
- ☆21 · Updated last month
- ☆11 · Updated last year
- Code for merging large language models ☆35 · Updated last year
- ☆46 · Updated 4 months ago
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning ☆13 · Updated last year
- [ICML 2024 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆46 · Updated last year
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆61 · Updated 11 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆21 · Updated last year
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆73 · Updated 6 months ago
- Official implementation for LaCo (EMNLP 2024 Findings) ☆21 · Updated last year
- ☆19 · Updated last year
- [NeurIPS 2025] Official implementation of "Reasoning Path Compression: Compressing Generation Trajectories for Efficient LLM Reasoning" ☆29 · Updated 3 months ago
- ☆63 · Updated 6 months ago
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… ☆25 · Updated 6 months ago
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆67 · Updated 10 months ago
- ☆20 · Updated last year
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆55 · Updated last year
- ☆17 · Updated 5 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆113 · Updated last year
- [ICLR 2026] dParallel: Learnable Parallel Decoding for dLLMs ☆58 · Updated this week
- ☆49 · Updated last year
- [ACL 2025 Oral] SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆34 · Updated 8 months ago
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" ☆22 · Updated 9 months ago
- Code for the paper "Long cOntext aliGnment via efficient preference Optimization" ☆24 · Updated 3 months ago
- [NAACL 2024 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models ☆39 · Updated last year