zaydzuhri / pythia-mlkv
Multi-Layer Key-Value sharing experiments on Pythia models
☆34 · Updated last year
Alternatives and similar repositories for pythia-mlkv
Users interested in pythia-mlkv are comparing it to the libraries listed below.
- Lottery Ticket Adaptation ☆39 · Updated 10 months ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- DPO, but faster 🚀 ☆45 · Updated 10 months ago
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" [EMNLP 2025] ☆32 · Updated last month
- ☆64 · Updated 6 months ago
- ☆21 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- Implementation of Mind Evolution ("Evolving Deeper LLM Thinking") from DeepMind ☆57 · Updated 4 months ago
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated ☆33 · Updated last year
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quantization, and Unsloth ☆183 · Updated 2 weeks ago
- ☆19 · Updated 7 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆48 · Updated 5 months ago
- ☆43 · Updated 5 months ago
- Official repository for Task-Circuit Quantization ☆24 · Updated 4 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆56 · Updated 2 weeks ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- A collection of tricks and tools to speed up transformer models ☆182 · Updated this week
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 accepted paper ☆33 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆36 · Updated 2 months ago
- ☆49 · Updated 8 months ago
- ☆39 · Updated last year
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆41 · Updated last year
- ☆67 · Updated 6 months ago
- ☆58 · Updated 4 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Cascade Speculative Drafting ☆31 · Updated last year
- RWKV-7: Surpassing GPT ☆96 · Updated 10 months ago
- MPI Code Generation through Domain-Specific Language Models ☆14 · Updated 10 months ago