zaydzuhri / pythia-mlkv
Multi-Layer Key-Value sharing experiments on Pythia models
☆34 · Updated last year
Alternatives and similar repositories for pythia-mlkv
Users interested in pythia-mlkv are comparing it to the libraries listed below.
- Lottery Ticket Adaptation ☆39 · Updated last year
- The official repo for “Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem” [EMNLP25] ☆33 · Updated 4 months ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- Official Repository for Task-Circuit Quantization ☆24 · Updated 8 months ago
- Implementation of Mind Evolution, Evolving Deeper LLM Thinking, from DeepMind ☆59 · Updated 8 months ago
- DPO, but faster 🚀 ☆46 · Updated last year
- ☆19 · Updated 10 months ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- A collection of lightweight interpretability scripts to understand how LLMs think ☆89 · Updated last week
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated last year
- ☆21 · Updated last year
- ☆41 · Updated last year
- ☆67 · Updated 10 months ago
- ☆70 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆61 · Updated last year
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated last week
- ☆66 · Updated 10 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆33 · Updated last year
- A specialized RWKV-7 model for Othello (a.k.a. Reversi) that predicts legal moves, evaluates positions, and performs in-context search. It… ☆43 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆58 · Updated last week
- Esoteric Language Models ☆109 · Updated 2 months ago
- Resa: Transparent Reasoning Models via SAEs ☆47 · Updated 4 months ago
- [ICML 24 NGSM workshop] Associative Recurrent Memory Transformer implementation and scripts for training and evaluation ☆61 · Updated 2 weeks ago
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- Train, tune, and infer Bamba model ☆138 · Updated 7 months ago
- ☆63 · Updated 7 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper ☆32 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆80 · Updated 9 months ago
- Fused Qwen3 MoE layer for faster training, compatible with Transformers, LoRA, bnb 4-bit quant, Unsloth. Also possible to train LoRA over… ☆229 · Updated this week