zaydzuhri / pythia-mlkv
Multi-Layer Key-Value sharing experiments on Pythia models
☆34 · Updated last year
Alternatives and similar repositories for pythia-mlkv
Users interested in pythia-mlkv are comparing it to the libraries listed below.
- Lottery Ticket Adaptation ☆40 · Updated 11 months ago
- A repository for research on medium-sized language models ☆78 · Updated last year
- Official repository for Task-Circuit Quantization ☆24 · Updated 4 months ago
- Implementation of Mind Evolution (Evolving Deeper LLM Thinking, from DeepMind) ☆57 · Updated 5 months ago
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- ☆19 · Updated 7 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆35 · Updated last year
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated ☆33 · Updated last year
- DPO, but faster 🚀 ☆45 · Updated 10 months ago
- ☆21 · Updated last year
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" [EMNLP 2025] ☆32 · Updated last month
- A single repo with all scripts and utilities to train or fine-tune the Mamba model, with or without FIM ☆59 · Updated last year
- RWKV-7: Surpassing GPT ☆98 · Updated 11 months ago
- Fast LLM training codebase with dynamic strategy selection [Deepspeed + Megatron + FlashAttention + CudaFusionKernel + Compiler] ☆41 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- [EMNLP 2025] The official implementation of the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆101 · Updated 2 months ago
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quantization, and Unsloth ☆197 · Updated last week
- ☆67 · Updated 7 months ago
- ☆68 · Updated last year
- Verifiers for LLM reinforcement learning ☆77 · Updated 6 months ago
- ☆49 · Updated 8 months ago
- Official implementation of APB (ACL 2025 main, oral) ☆31 · Updated 8 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆36 · Updated 3 weeks ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆99 · Updated last year
- The official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" ☆35 · Updated 2 weeks ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆56 · Updated last week
- A specialized RWKV-7 model for Othello (a.k.a. Reversi) that predicts legal moves, evaluates positions, and performs in-context search. It… ☆42 · Updated 9 months ago
- ☆55 · Updated 4 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 4 months ago