joey00072 / Multi-Head-Latent-Attention-MLA-
Working implementation of DeepSeek MLA
☆44 · Updated 9 months ago
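For orientation, below is a minimal sketch of the core MLA idea, not the code from this repo: keys and values are reconstructed from a shared low-rank latent, so at inference time only that small latent would need to be cached instead of full per-head K/V. The class name, `d_latent`, and all dimensions are illustrative assumptions, and DeepSeek's decoupled-RoPE key path and actual sizes are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLASketch(nn.Module):
    """Minimal Multi-Head Latent Attention sketch (no RoPE, no KV cache)."""

    def __init__(self, d_model=512, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        # Down-projection to the shared KV latent (this is what MLA caches).
        self.w_dkv = nn.Linear(d_model, d_latent, bias=False)
        # Up-projections from the latent back to per-head keys and values.
        self.w_uk = nn.Linear(d_latent, d_model, bias=False)
        self.w_uv = nn.Linear(d_latent, d_model, bias=False)
        self.w_o = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x):
        b, t, _ = x.shape
        split = lambda z: z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        q = split(self.w_q(x))
        c_kv = self.w_dkv(x)          # (b, t, d_latent): compressed KV state
        k = split(self.w_uk(c_kv))    # keys rebuilt from the latent
        v = split(self.w_uv(c_kv))    # values rebuilt from the same latent
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.w_o(out.transpose(1, 2).reshape(b, t, -1))

x = torch.randn(2, 16, 512)
print(MLASketch()(x).shape)  # torch.Size([2, 16, 512])
```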
Alternatives and similar repositories for Multi-Head-Latent-Attention-MLA-
Users interested in Multi-Head-Latent-Attention-MLA- are comparing it to the repositories listed below
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 7 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆59 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- ☆64 · Updated 7 months ago
- DeMo: Decoupled Momentum Optimization ☆194 · Updated 10 months ago
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated last year
- ☆46 · Updated 6 months ago
- RWKV-7: Surpassing GPT ☆98 · Updated 11 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 5 months ago
- ☆136 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 9 months ago
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆68 · Updated 5 months ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆92 · Updated 5 months ago
- look how they massacred my boy ☆63 · Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 9 months ago
- Exploring Applications of GRPO ☆248 · Updated 2 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆127 · Updated 2 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆248 · Updated 8 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆35 · Updated 7 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆103 · Updated 2 weeks ago
- RL from zero pretrain: can it be done? Yes. ☆277 · Updated 3 weeks ago
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated last month
- A collection of lightweight interpretability scripts to understand how LLMs think ☆61 · Updated this week
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated last week
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆202 · Updated last year
- ☆24 · Updated 5 months ago
- Fine-tunes a student LLM using teacher feedback for improved reasoning and answer quality. Implements GRPO with teacher-provided evaluati… ☆46 · Updated 5 months ago
- Normalized Transformer (nGPT) ☆192 · Updated 11 months ago
- A Qwen 0.5B reasoning model trained on OpenR1-Math-220k ☆14 · Updated 2 weeks ago