joey00072 / Multi-Head-Latent-Attention-MLA-
Working implementation of DeepSeek MLA
☆45 · Updated 10 months ago
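For context, the MLA (Multi-Head Latent Attention) that this repository implements compresses each token's key/value information into a small latent vector, so that only the latent needs to be cached during decoding. Below is a minimal single-head NumPy sketch of that core idea; the dimensions and weight names (`W_dkv`, `W_uk`, `W_uv`, `W_q`) are illustrative assumptions, not taken from the repo, and details like RoPE decoupling and multi-head splitting are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d_model, d_latent, d_head, seq = 16, 4, 16, 5  # toy sizes; d_latent << d_model

# Down-projection: compress each hidden state into a latent c_kv.
# In MLA, c_kv is all that needs to be stored in the KV cache.
W_dkv = rng.standard_normal((d_model, d_latent)) / np.sqrt(d_model)
# Up-projections reconstruct keys and values from the latent.
W_uk = rng.standard_normal((d_latent, d_head)) / np.sqrt(d_latent)
W_uv = rng.standard_normal((d_latent, d_head)) / np.sqrt(d_latent)
W_q = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)

h = rng.standard_normal((seq, d_model))  # token hidden states
c_kv = h @ W_dkv                         # (seq, d_latent): the compressed cache
q = h @ W_q                              # queries from the full hidden state
k = c_kv @ W_uk                          # keys reconstructed from the latent
v = c_kv @ W_uv                          # values reconstructed from the latent

scores = q @ k.T / np.sqrt(d_head)       # standard scaled dot-product attention
out = softmax(scores) @ v
print(out.shape)  # → (5, 16)
```

The payoff is cache size: here each cached token costs `d_latent = 4` floats instead of `2 * d_head = 32` for separate K and V.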
Alternatives and similar repositories for Multi-Head-Latent-Attention-MLA-
Users interested in Multi-Head-Latent-Attention-MLA- are comparing it to the repositories listed below.
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆108 · Updated 8 months ago
- Collection of autoregressive model implementations ☆86 · Updated 7 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 7 months ago
- DeMo: Decoupled Momentum Optimization ☆197 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆60 · Updated last year
- ☆66 · Updated 8 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 7 months ago
- ☆46 · Updated 8 months ago
- RWKV-7: Surpassing GPT ☆101 · Updated last year
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated 2 months ago
- Focused on fast experimentation and simplicity ☆75 · Updated 11 months ago
- ☆136 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 10 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆100 · Updated 6 months ago
- Entropy-Based Sampling and Parallel CoT Decoding ☆17 · Updated last year
- ☆24 · Updated 6 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated last month
- A collection of lightweight interpretability scripts to understand how LLMs think ☆68 · Updated last week
- Cerule - A Tiny Mighty Vision Model ☆68 · Updated 3 weeks ago
- ☆40 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆85 · Updated 2 months ago
- Lego for GRPO ☆30 · Updated 6 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆35 · Updated 8 months ago
- A Qwen 0.5B reasoning model trained on OpenR1-Math-220k ☆14 · Updated last month
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆63 · Updated 9 months ago
- look how they massacred my boy ☆63 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆131 · Updated last month
- An introduction to LLM sampling ☆79 · Updated 11 months ago
- Lightweight package that tracks and summarizes code changes using LLMs (Large Language Models) ☆34 · Updated 9 months ago
- A set of scripts to fine-tune LLMs ☆38 · Updated last year