allen4747 / Ferret
This is the official implementation of the paper "Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models".
☆13 · Updated 8 months ago
Alternatives and similar repositories for Ferret
Users interested in Ferret are comparing it to the repositories listed below.
- PyTorch implementation of "Compressed Context Memory for Online Language Model Interaction" (ICLR'24) ☆59 · Updated last year
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging"☆26Updated 6 months ago
- Knowledge Unlearning for Large Language Models☆26Updated last week
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models☆89Updated 11 months ago
- Exploring Model Kinship for Merging Large Language Models☆24Updated last month
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆67 · Updated 3 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆41 · Updated last year
- Official repository of "Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions", ICLR 2024 Sp… ☆20 · Updated last year
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆23 · Updated 10 months ago
- A block pruning framework for LLMs. ☆23 · Updated 10 months ago
- PyTorch implementation of our paper accepted by ICML 2024, "CaM: Cache Merging for Memory-efficient LLMs Inference" ☆37 · Updated 10 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆39 · Updated 11 months ago
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆15 · Updated 9 months ago
- Implementation for PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs ☆22 · Updated 11 months ago
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆69 · Updated last year
- Code for "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆68 · Updated 7 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆47 · Updated 3 weeks ago
- A curated list of Model Merging methods. ☆92 · Updated 8 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆47 · Updated 2 months ago
- [ICML 2024] "LoCoCo: Dropping In Convolutions for Long Context Compression", Ruisi Cai, Yuandong Tian, Zhangyang Wang, Beidi Chen ☆16 · Updated 8 months ago
- This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆50 · Updated 10 months ago
- Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More ☆31 · Updated this week