kabir2505 / tiny-mixtral
☆45 · Updated 5 months ago
Alternatives and similar repositories for tiny-mixtral
Users interested in tiny-mixtral are comparing it to the libraries listed below.
- ☆46 · Updated 6 months ago
- minimal GRPO implementation from scratch ☆98 · Updated 6 months ago
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆68 · Updated 4 months ago
- ☆203 · Updated 9 months ago
- 🎈 A series of lightweight GPT models featuring TinyGPT Base (~51M params) and TinyGPT-MoE (~85M params). Fast, creative text generation … ☆12 · Updated 2 weeks ago
- ☆437 · Updated last month
- Working implementation of DeepSeek MLA ☆44 · Updated 8 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 5 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 4 months ago
- A collection of lightweight interpretability scripts to understand how LLMs think ☆56 · Updated last week
- An overview of GRPO & DeepSeek-R1 training, with open-source GRPO model fine-tuning ☆36 · Updated 4 months ago
- A collection of tricks and tools to speed up transformer models ☆182 · Updated this week
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆59 · Updated last year
- GPU Kernels ☆198 · Updated 5 months ago
- Making the official Triton tutorials actually comprehensible ☆54 · Updated last month
- An extension of the nanoGPT repository for training small MoE models. ☆195 · Updated 6 months ago
- Memory-optimized Mixture of Experts ☆68 · Updated 2 months ago
- Building LLaMA 4 MoE from Scratch ☆64 · Updated 5 months ago
- Quantized LLM training in pure CUDA/C++. ☆32 · Updated last week
- ☆76 · Updated last week
- Implementation of a GPT-4o-like multimodal model from scratch using Python ☆72 · Updated 6 months ago
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆30 · Updated 7 months ago
- ☆222 · Updated this week
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆70 · Updated last month
- ☆64 · Updated 6 months ago
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆59 · Updated 11 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 8 months ago
- Low-memory full-parameter finetuning of LLMs ☆53 · Updated 2 months ago
- ☆45 · Updated 4 months ago
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆159 · Updated last year