hkproj / multi-latent-attention
☆45 · Updated 5 months ago
Alternatives and similar repositories for multi-latent-attention
Users interested in multi-latent-attention are comparing it to the libraries listed below.
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 4 months ago
- ☆46 · Updated 6 months ago
- Fine-tune Gemma 3 on an object detection task ☆86 · Updated 3 months ago
- GPU Kernels ☆203 · Updated 5 months ago
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆48 · Updated last year
- Notebooks for fine-tuning PaliGemma ☆117 · Updated 6 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆296 · Updated 2 months ago
- Building GPT ... ☆18 · Updated 10 months ago
- Notebook and scripts that showcase running quantized diffusion models on consumer GPUs ☆38 · Updated 11 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆202 · Updated 7 months ago
- ☆222 · Updated 3 weeks ago
- A repository consisting of paper/architecture replications of classic and SOTA AI/ML papers in PyTorch ☆383 · Updated 3 weeks ago
- ☆209 · Updated 9 months ago
- From-scratch implementation of a vision language model in pure PyTorch ☆244 · Updated last year
- "LLM from Zero to Hero: An End-to-End Large Language Model Journey from Data to Application!" ☆134 · Updated 2 weeks ago
- ☆107 · Updated last month
- working implementation of DeepSeek MLA ☆44 · Updated 9 months ago
- Set of scripts to finetune LLMs ☆38 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆110 · Updated last year
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆421 · Updated 7 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- Load compute kernels from the Hub ☆304 · Updated last week
- Prune transformer layers ☆69 · Updated last year
- ☆86 · Updated this week
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆30 · Updated 8 months ago
- code for training & evaluating Contextual Document Embedding models ☆198 · Updated 5 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆85 · Updated last month
- minimal GRPO implementation from scratch ☆98 · Updated 7 months ago
- ☆29 · Updated last year