zaydzuhri / pythia-mlkv
Multi-Layer Key-Value sharing experiments on Pythia models
☆34 · Updated last year
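MLKV (multi-layer key-value sharing) extends the KV-head sharing of MQA/GQA across layers: a group of consecutive layers reuses the keys and values computed by the group's first layer, shrinking the KV cache roughly by the group size. Below is a minimal PyTorch sketch of that idea, not the repository's actual code; the names `SharedKVAttention`, `computes_kv`, and `shared_kv` are illustrative assumptions.

```python
# Minimal sketch of multi-layer KV sharing (the MLKV idea), not the repo's code.
# A group of consecutive layers reuses the K/V computed by the group's first
# layer, so the KV cache only needs entries for one layer per group.
# `SharedKVAttention`, `computes_kv`, and `shared_kv` are illustrative names.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedKVAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, computes_kv: bool):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.o_proj = nn.Linear(d_model, d_model)
        self.computes_kv = computes_kv
        if computes_kv:  # only the first layer of each KV group owns K/V projections
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)

    def _split(self, t):
        B, T, _ = t.shape
        return t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)  # (B, h, T, d)

    def forward(self, x, shared_kv=None):
        q = self._split(self.q_proj(x))
        if self.computes_kv:
            # Compute K/V once; later layers in the group receive them as `shared_kv`.
            shared_kv = (self._split(self.k_proj(x)), self._split(self.v_proj(x)))
        k, v = shared_kv
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        B, T = x.shape[:2]
        return self.o_proj(out.transpose(1, 2).reshape(B, T, -1)), shared_kv

# Usage: with kv_group_size = 2, layers 0, 2, 4, ... compute K/V and layers
# 1, 3, 5, ... reuse them, halving the KV cache during decoding.
n_layers, kv_group_size = 6, 2
layers = [SharedKVAttention(512, 8, computes_kv=(i % kv_group_size == 0))
          for i in range(n_layers)]
x = torch.randn(1, 16, 512)
kv = None
for layer in layers:
    x, kv = layer(x, shared_kv=kv)
```

With group size g over n layers, only about n/g layers own K/V projections, so cache memory drops by roughly a factor of g, traded against less layer-specific attention.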
Alternatives and similar repositories for pythia-mlkv
Users interested in pythia-mlkv are comparing it to the libraries listed below
- Lottery Ticket Adaptation ☆39 · Updated 9 months ago
- Official Repository for Task-Circuit Quantization ☆23 · Updated 3 months ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆56 · Updated last week
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" [EMNLP 2025] ☆31 · Updated 2 weeks ago
- Official Implementation of APB (ACL 2025 main, Oral) ☆31 · Updated 6 months ago
- A repository for research on medium-sized language models ☆77 · Updated last year
- Implementation of Mind Evolution, "Evolving Deeper LLM Thinking", from DeepMind ☆56 · Updated 3 months ago
- DPO, but faster 🚀 ☆44 · Updated 9 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆36 · Updated last month
- ☆41 · Updated last year
- ☆64 · Updated 5 months ago
- Verifiers for LLM Reinforcement Learning ☆72 · Updated 5 months ago
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, Unsloth ☆172 · Updated last week
- ☆20 · Updated last year
- ☆39 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆34 · Updated last year
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆86 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- ☆67 · Updated 5 months ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆41 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆88 · Updated 4 months ago
- Train, tune, and run inference with the Bamba model ☆132 · Updated 3 months ago
- [ICML 2024 NGSM workshop] Associative Recurrent Memory Transformer implementation and scripts for training and evaluation ☆51 · Updated 2 weeks ago
- ☆42 · Updated 4 months ago
- Official implementation of Regularized Policy Gradient (RPG) (https://arxiv.org/abs/2505.17508) ☆37 · Updated this week
- ☆57 · Updated 4 months ago
- Code implementation, evaluations, documentation, links, and resources for the Min P paper ☆40 · Updated last month
- ☆19 · Updated 6 months ago