alperiox / Compact-Language-Models-via-Pruning-and-Knowledge-Distillation
Unofficial implementation of https://arxiv.org/pdf/2407.14679
☆44 · Updated 8 months ago
Alternatives and similar repositories for Compact-Language-Models-via-Pruning-and-Knowledge-Distillation
Users interested in Compact-Language-Models-via-Pruning-and-Knowledge-Distillation are comparing it to the libraries listed below.
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last year
- Prune transformer layers ☆69 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated 11 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆78 · Updated 8 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, … ☆47 · Updated last month
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 8 months ago
- Verifiers for LLM Reinforcement Learning ☆56 · Updated last month
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch ☆55 · Updated this week
- ☆125 · Updated last year
- Efficient Infinite Context Transformers with Infini-attention Pytorch Implementation + QwenMoE Implementation + Training Script + 1M cont… ☆82 · Updated last year
- ☆50 · Updated 7 months ago
- Long Context Extension and Generalization in LLMs ☆56 · Updated 8 months ago
- ☆47 · Updated 9 months ago
- minimal GRPO implementation from scratch ☆90 · Updated 2 months ago
- Pytorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆60 · Updated last year
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆29 · Updated 3 months ago
- ☆72 · Updated last month
- Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More ☆31 · Updated 2 weeks ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 3 months ago
- ☆79 · Updated 4 months ago
- Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆37 · Updated 4 months ago
- This is the official repository for Inheritune. ☆111 · Updated 3 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 · Updated 8 months ago
- The official implementation of the paper "Towards Efficient Mixture of Experts: A Holistic Study of Compression Techniques" (TMLR). ☆70 · Updated 2 months ago
- ☆76 · Updated last year
- Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆90 · Updated 2 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆69 · Updated last year
- Minimal implementation of the "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" paper (arXiv 2401.01335) ☆29 · Updated last year