changjonathanc / flex-nano-vllm
A FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference.
☆ 274 · Updated last month
Alternatives and similar repositories for flex-nano-vllm
Users interested in flex-nano-vllm are comparing it to the libraries listed below.
- ☆ 217 · Updated 7 months ago
- An extension of the nanoGPT repository for training small MoE models ☆ 187 · Updated 6 months ago
- Load compute kernels from the Hub ☆ 283 · Updated this week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆ 193 · Updated 3 months ago
- ☆ 171 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆ 335 · Updated 4 months ago
- Dion optimizer algorithm ☆ 343 · Updated 2 weeks ago
- ☆ 638 · Updated last week
- PyTorch Single Controller ☆ 419 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆ 343 · Updated 9 months ago
- Normalized Transformer (nGPT) ☆ 188 · Updated 10 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆ 185 · Updated 3 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆ 293 · Updated 3 weeks ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆ 401 · Updated 3 weeks ago
- Decentralized RL Training at Scale ☆ 592 · Updated this week
- Memory-optimized Mixture of Experts ☆ 65 · Updated last month
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆ 265 · Updated last month
- ☆ 199 · Updated 8 months ago
- 👷 Build compute kernels ☆ 143 · Updated this week
- PyTorch building blocks for the OLMo ecosystem ☆ 292 · Updated this week
- Exploring Applications of GRPO ☆ 249 · Updated 3 weeks ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code ☆ 408 · Updated 6 months ago
- ring-attention experiments ☆ 152 · Updated 11 months ago
- rl from zero pretrain, can it be done? yes. ☆ 268 · Updated 3 weeks ago
- ☆ 428 · Updated 3 weeks ago
- Open-source framework for the research and development of foundation models ☆ 439 · Updated this week
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ☆ 66 · Updated 5 months ago
- Tina: Tiny Reasoning Models via LoRA ☆ 282 · Updated last month
- ☆ 196 · Updated 9 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆ 160 · Updated 2 months ago