hkproj / multi-latent-attention
⭐ 35 · Updated last week
Alternatives and similar repositories for multi-latent-attention
Users interested in multi-latent-attention are comparing it to the libraries listed below.
- Fine-tune Gemma 3 on an object detection task (⭐ 43, updated this week)
- Complete implementation of Llama2 with/without KV cache and inference (⭐ 46, updated last year)
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand (⭐ 184, updated last week)
- Building GPT ... (⭐ 17, updated 6 months ago)
- An introduction to LLM sampling (⭐ 78, updated 5 months ago)
- Notebooks for fine-tuning PaliGemma (⭐ 107, updated last month)
- Collection of autoregressive model implementations (⭐ 85, updated last month)
- Working implementation of DeepSeek MLA (⭐ 41, updated 4 months ago)
- A repository of architecture replications of classic and SOTA AI/ML papers in PyTorch (⭐ 196, updated last month)
- Notebooks and scripts that showcase running quantized diffusion models on consumer GPUs (⭐ 38, updated 7 months ago)
- A set of scripts to fine-tune LLMs (⭐ 37, updated last year)
- Making the official Triton tutorials actually comprehensible (⭐ 34, updated 2 months ago)
- I trained a 130M Llama architecture, coded from the ground up, to build a small instruct model from scratch; trained on the FineWeb dataset… (⭐ 14, updated 2 months ago)
- A set of scripts and notebooks on LLM fine-tuning and dataset creation (⭐ 111, updated 8 months ago)
- NanoGPT-speedrunning for the poor T4 enjoyers (⭐ 66, updated last month)
- GPU Kernels (⭐ 178, updated last month)
- I learn about and explain quantization (⭐ 26, updated last year)
- In this repository, I implement increasingly complex LLM inference optimizations (⭐ 58, updated last week)
- Fine-tune an LLM to perform batch inference and online serving (⭐ 111, updated 3 weeks ago)
- Prune transformer layers (⭐ 69, updated last year)
- Quantization of LLMs and benchmarking (⭐ 10, updated last year)
- RL from zero pretrain: can it be done? We'll see. (⭐ 24, updated this week)
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric, reference answer, absolute… (⭐ 49, updated 10 months ago)