zaydzuhri / pythia-mlkv
Multi-Layer Key-Value sharing experiments on Pythia models
☆33 · Updated last year
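The technique named in the description above, multi-layer key/value sharing, lets several transformer layers reuse one KV cache instead of each keeping its own, shrinking inference cache memory. The following is a minimal illustrative sketch of that idea, not code from this repository; the contiguous-block grouping scheme and all names (`build_layer_to_kv_group`, `SharedKVCache`) are assumptions.

```python
# Toy sketch of multi-layer KV sharing: layers are partitioned into groups,
# and every layer in a group reads/writes the same KV cache, cutting cache
# memory by roughly num_layers / num_kv_groups.

def build_layer_to_kv_group(num_layers: int, num_kv_groups: int) -> list[int]:
    """Assign each layer to a KV group (contiguous blocks of layers)."""
    assert num_layers % num_kv_groups == 0
    layers_per_group = num_layers // num_kv_groups
    return [layer // layers_per_group for layer in range(num_layers)]

class SharedKVCache:
    """One KV store per group; each layer looks up its group's cache."""
    def __init__(self, num_layers: int, num_kv_groups: int):
        self.layer_to_group = build_layer_to_kv_group(num_layers, num_kv_groups)
        self.caches = [[] for _ in range(num_kv_groups)]  # (k, v) pairs per group

    def append(self, layer: int, k, v):
        self.caches[self.layer_to_group[layer]].append((k, v))

    def get(self, layer: int):
        return self.caches[self.layer_to_group[layer]]

# Example: a 6-layer model with 2 KV groups -> a 3x smaller KV cache.
cache = SharedKVCache(num_layers=6, num_kv_groups=2)
print(cache.layer_to_group)  # [0, 0, 0, 1, 1, 1]
```

With this layout, layers 0-2 share one cache and layers 3-5 share the other, so KV tensors appended by layer 0 are visible when layer 2 attends over the cache.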
Alternatives and similar repositories for pythia-mlkv
Users interested in pythia-mlkv are comparing it to the repositories listed below.
- Lottery Ticket Adaptation · ☆39 · Updated 9 months ago
- A repository for research on medium sized language models. · ☆78 · Updated last year
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" · ☆30 · Updated 2 months ago
- DPO, but faster 🚀 · ☆44 · Updated 8 months ago
- Data preparation code for CrystalCoder 7B LLM · ☆45 · Updated last year
- Repo hosting codes and materials related to speeding LLMs' inference using token merging. · ☆36 · Updated last month
- ☆19 · Updated 5 months ago
- ☆38 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… · ☆56 · Updated 2 weeks ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper · ☆33 · Updated last year
- Code Implementation, Evaluations, Documentation, Links and Resources for Min P paper · ☆39 · Updated 2 weeks ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated · ☆33 · Updated last year
- Verifiers for LLM Reinforcement Learning · ☆71 · Updated 4 months ago
- ☆66 · Updated 4 months ago
- Resa: Transparent Reasoning Models via SAEs · ☆41 · Updated 2 weeks ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" · ☆98 · Updated 10 months ago
- Official Implementation of APB (ACL 2025 main Oral) · ☆31 · Updated 6 months ago
- Official Repository for Task-Circuit Quantization · ☆22 · Updated 2 months ago
- RWKV-7: Surpassing GPT · ☆94 · Updated 9 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM · ☆56 · Updated last year
- ☆24 · Updated 11 months ago
- Linear Attention Sequence Parallelism (LASP) · ☆86 · Updated last year
- Official implementation of "Reasoning Path Compression: Compressing Generation Trajectories for Efficient LLM Reasoning" · ☆21 · Updated 3 months ago
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. · ☆39 · Updated 11 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. · ☆34 · Updated last year
- GoldFinch and other hybrid transformer components · ☆46 · Updated last year
- ☆66 · Updated last year
- ☆51 · Updated 2 months ago
- ☆41 · Updated last year
- Implementation of Mind Evolution, Evolving Deeper LLM Thinking, from DeepMind · ☆56 · Updated 2 months ago