hkproj / multi-latent-attention
☆40 · Updated last month
Alternatives and similar repositories for multi-latent-attention
Users interested in multi-latent-attention are comparing it to the repositories listed below.
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆188 · Updated last month
- ☆46 · Updated 3 months ago
- GPU Kernels ☆188 · Updated 2 months ago
- Fine-tune Gemma 3 on an object detection task ☆69 · Updated this week
- Notebooks for fine-tuning PaliGemma ☆111 · Updated 3 months ago
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆47 · Updated last year
- An extension of the nanoGPT repository for training small MoE models ☆160 · Updated 4 months ago
- A repository of paper/architecture replications of classic/SOTA AI/ML papers in PyTorch ☆309 · Updated 3 weeks ago
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- ☆27 · Updated 9 months ago
- From-scratch implementation of a vision-language model in pure PyTorch ☆227 · Updated last year
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆63 · Updated last month
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆110 · Updated 9 months ago
- NanoGPT speedrunning for the poor T4 enjoyers ☆68 · Updated 2 months ago
- ☆198 · Updated 5 months ago
- ☆96 · Updated last month
- RAGs: Simple implementations of Retrieval-Augmented Generation (RAG) systems ☆123 · Updated 5 months ago
- Notebooks and scripts that showcase running quantized diffusion models on consumer GPUs ☆38 · Updated 8 months ago
- ☆179 · Updated 6 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code ☆378 · Updated 4 months ago
- Minimal GRPO implementation from scratch ☆92 · Updated 4 months ago
- "LLM from Zero to Hero: An End-to-End Large Language Model Journey from Data to Application!" ☆30 · Updated this week
- Making the official Triton tutorials actually comprehensible ☆45 · Updated 3 months ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆197 · Updated last year
- RL from zero pretrain, can it be done? We'll see. ☆65 · Updated 3 weeks ago
- Building GPT ... ☆18 · Updated 7 months ago
- Google TPU optimizations for transformer models ☆114 · Updated 5 months ago
- Working implementation of DeepSeek MLA ☆42 · Updated 6 months ago
- This repo has all the basic things you'll need in order to understand the complete vision transformer architecture and its various implementa… ☆227 · Updated 6 months ago
- LoRA: Low-Rank Adaptation of Large Language Models implemented in PyTorch ☆110 · Updated last year
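Since the repositories above center on multi-latent attention (MLA), a minimal NumPy sketch of the core idea may help orient readers: instead of caching full per-head K and V tensors, MLA caches a single shared low-rank latent and up-projects it per head. All dimensions and weight names below are illustrative assumptions, not taken from any of the listed repos or from the DeepSeek implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_latent, n_heads, d_head, seq = 16, 4, 2, 8, 5  # toy sizes (assumed)

# Hypothetical weights: one shared down-projection, per-head up-projections.
W_dkv = rng.standard_normal((d_model, d_latent)) * 0.1          # shared KV down-proj
W_uk = rng.standard_normal((n_heads, d_latent, d_head)) * 0.1   # per-head K up-proj
W_uv = rng.standard_normal((n_heads, d_latent, d_head)) * 0.1   # per-head V up-proj
W_q = rng.standard_normal((n_heads, d_model, d_head)) * 0.1     # per-head queries

x = rng.standard_normal((seq, d_model))

# The cache holds only this (seq, d_latent) latent, not per-head K/V.
c_kv = x @ W_dkv

heads = []
for h in range(n_heads):
    q = x @ W_q[h]                     # (seq, d_head)
    k = c_kv @ W_uk[h]                 # K recovered from the shared latent
    v = c_kv @ W_uv[h]                 # V recovered from the shared latent
    scores = q @ k.T / np.sqrt(d_head)
    scores[np.triu(np.ones((seq, seq)), 1).astype(bool)] = -np.inf  # causal mask
    attn = np.exp(scores - scores.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)
    heads.append(attn @ v)

out = np.concatenate(heads, axis=-1)
print(out.shape)  # → (5, 16)
```

The payoff is cache size: here the latent holds `seq * d_latent = 20` floats versus `2 * n_heads * d_head * seq = 160` for a conventional KV cache, at the cost of the per-head up-projections at read time. (Real MLA, as in DeepSeek-V2, also handles RoPE with a separate decoupled key path, which this sketch omits.)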