zaydzuhri / pythia-mlkv
Multi-Layer Key-Value sharing experiments on Pythia models
☆34 · Updated last year
Alternatives and similar repositories for pythia-mlkv
Users interested in pythia-mlkv are comparing it to the libraries listed below.
- Lottery Ticket Adaptation ☆40 · Updated last year
- Official Repository for Task-Circuit Quantization ☆24 · Updated 5 months ago
- The official repo for “Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem” [EMNLP 2025] ☆33 · Updated 2 months ago
- DPO, but faster 🚀 ☆46 · Updated 11 months ago
- ☆19 · Updated 8 months ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆35 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- Official Implementation of APB (ACL 2025 Main, Oral) ☆31 · Updated 9 months ago
- Resa: Transparent Reasoning Models via SAEs ☆44 · Updated 2 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆37 · Updated last month
- Implementation of Mind Evolution ("Evolving Deeper LLM Thinking") from DeepMind ☆57 · Updated 5 months ago
- ☆55 · Updated 5 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆60 · Updated last year
- ☆32 · Updated last month
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- OLMost every training recipe you need to perform data interventions with the OLMo family of models. ☆52 · Updated this week
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated this week
- ☆46 · Updated 6 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024) ☆33 · Updated last year
- ☆67 · Updated 7 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆57 · Updated last week
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quantization, and Unsloth ☆208 · Updated 2 weeks ago
- ☆52 · Updated last year
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated ☆33 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆79 · Updated 7 months ago
- ☆68 · Updated last year
- [EMNLP 2025] The official implementation of the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆100 · Updated 2 months ago
- ☆21 · Updated last year