hkproj / multi-latent-attention
☆45 · Updated 7 months ago
Alternatives and similar repositories for multi-latent-attention
Users interested in multi-latent-attention are comparing it to the repositories listed below.
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 7 months ago
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆49 · Updated last year
- ☆46 · Updated 9 months ago
- Notebooks for fine-tuning PaliGemma ☆117 · Updated 8 months ago
- An extension of the nanoGPT repository for training small MoE models ☆225 · Updated 10 months ago
- Prune transformer layers ☆74 · Updated last year
- GPU Kernels ☆218 · Updated 8 months ago
- Fine-tune Gemma 3 on an object detection task ☆95 · Updated 5 months ago
- Notebooks and scripts that showcase running quantized diffusion models on consumer GPUs ☆38 · Updated last year
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆328 · Updated 2 months ago
- ☆114 · Updated 4 months ago
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆31 · Updated 10 months ago
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆119 · Updated 2 years ago
- From-scratch implementation of a vision language model in pure PyTorch ☆252 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated this week
- A repository of paper/architecture replications of classic/SOTA AI/ML papers in PyTorch ☆399 · Updated 2 months ago
- Building GPT ... ☆18 · Updated last year
- Set of scripts to fine-tune LLMs ☆38 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆86 · Updated 4 months ago
- Distributed training (multi-node) of a Transformer model ☆90 · Updated last year
- ☆233 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆215 · Updated last year
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆79 · Updated 7 months ago
- ☆224 · Updated last month
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆61 · Updated last year
- ⏰ AI conference deadline countdowns ☆309 · Updated this week
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆53 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆351 · Updated 8 months ago
- Code for training & evaluating Contextual Document Embedding models ☆202 · Updated 8 months ago
- "LLM from Zero to Hero: An End-to-End Large Language Model Journey from Data to Application!" ☆141 · Updated 2 weeks ago