kabir2505 / tiny-mixtral
☆45 · Updated 7 months ago
Alternatives and similar repositories for tiny-mixtral
Users interested in tiny-mixtral are comparing it to the libraries listed below; a minimal sketch of the Mixtral-style top-2 expert routing that several of these repos implement follows the list.
- ☆46 · Updated 8 months ago
- A collection of lightweight interpretability scripts to understand how LLMs think ☆74 · Updated this week
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 8 months ago
- minimal GRPO implementation from scratch ☆100 · Updated 9 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 6 months ago
- An extension of the nanoGPT repository for training small MOE models. ☆219 · Updated 9 months ago
- making the official triton tutorials actually comprehensible ☆80 · Updated 4 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆61 · Updated last year
- GPU Kernels ☆212 · Updated 8 months ago
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆327 · Updated last month
- ☆465 · Updated 3 months ago
- Memory optimized Mixture of Experts ☆72 · Updated 5 months ago
- ☆228 · Updated 11 months ago
- in this repository, i'm going to implement increasingly complex llm inference optimizations ☆75 · Updated 7 months ago
- A collection of tricks and tools to speed up transformer models ☆193 · Updated last week
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 2 months ago
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ☆66 · Updated 9 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 3 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆108 · Updated 9 months ago
- Simple repository for training small reasoning models ☆47 · Updated 10 months ago
- coding CUDA everyday! ☆71 · Updated 2 weeks ago
- An overview of GRPO & DeepSeek-R1 Training with Open Source GRPO Model Fine Tuning ☆37 · Updated 7 months ago
- 🎈 A series of lightweight GPT models featuring TinyGPT Base (~51M params) and TinyGPT-MoE (~85M params). Fast, creative text generation … ☆15 · Updated 3 weeks ago
- Distributed training (multi-node) of a Transformer model ☆90 · Updated last year
- ☆225 · Updated last month
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆31 · Updated 10 months ago
- working implementation of DeepSeek MLA ☆45 · Updated 11 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆305 · Updated 3 weeks ago
- Building LLaMA 4 MoE from Scratch ☆70 · Updated 8 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆444 · Updated 9 months ago
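
Since tiny-mixtral and several of the repos above (the nanoGPT MoE extension, the memory-optimized Mixture of Experts, the LLaMA 4 MoE build) center on sparse Mixture-of-Experts layers, here is a minimal PyTorch sketch of the top-2 expert routing that Mixtral-style models use. It is illustrative only: the module and parameter names (`Expert`, `MoELayer`, `n_experts`, `top_k`) are assumptions, not taken from tiny-mixtral or any repo listed here.

```python
# Minimal, self-contained sketch of Mixtral-style top-2 expert routing.
# Names (Expert, MoELayer, n_experts, top_k) are illustrative assumptions,
# not the API of tiny-mixtral or any listed repository.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """A single SwiGLU-style feed-forward expert."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden, bias=False)  # gate projection
        self.w2 = nn.Linear(hidden, dim, bias=False)  # down projection
        self.w3 = nn.Linear(dim, hidden, bias=False)  # up projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w2(F.silu(self.w1(x)) * self.w3(x))


class MoELayer(nn.Module):
    """Sparse MoE block: a router picks top_k experts per token and mixes their outputs."""

    def __init__(self, dim: int = 128, hidden: int = 512, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(Expert(dim, hidden) for _ in range(n_experts))
        self.router = nn.Linear(dim, n_experts, bias=False)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, seq, dim = x.shape
        flat = x.reshape(-1, dim)                        # (tokens, dim)
        logits = self.router(flat)                       # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)   # best top_k experts per token
        weights = F.softmax(weights, dim=-1)             # renormalize over the chosen experts

        out = torch.zeros_like(flat)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (idx == e).nonzero(as_tuple=True)  # tokens routed to expert e
            if token_ids.numel() == 0:
                continue
            out[token_ids] += weights[token_ids, slot, None] * expert(flat[token_ids])
        return out.reshape(batch, seq, dim)


if __name__ == "__main__":
    layer = MoELayer()
    tokens = torch.randn(2, 16, 128)  # (batch, seq_len, dim)
    print(layer(tokens).shape)        # torch.Size([2, 16, 128])
```

Training-oriented implementations typically add a load-balancing auxiliary loss and dispatch tokens to experts in batches rather than looping over experts, but the routing idea is the same.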