kabir2505 / tiny-mixtral
☆44 · Updated 4 months ago
Alternatives and similar repositories for tiny-mixtral
Users interested in tiny-mixtral are comparing it to the libraries listed below.
- ☆46 · Updated 5 months ago
- minimal GRPO implementation from scratch ☆96 · Updated 6 months ago
- working implementation of DeepSeek MLA ☆44 · Updated 8 months ago
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ☆66 · Updated 5 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆185 · Updated 6 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆71 · Updated 4 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 3 months ago
- ☆199 · Updated 8 months ago
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆68 · Updated 3 months ago
- A collection of tricks and tools to speed up transformer models ☆178 · Updated last week
- GPU Kernels ☆193 · Updated 4 months ago
- Collection of autoregressive model implementations ☆86 · Updated 4 months ago
- ☆428 · Updated 2 weeks ago
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆118 · Updated 4 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆405 · Updated 6 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆56 · Updated last year
- ☆69 · Updated last week
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆274 · Updated last month
- Memory optimized Mixture of Experts ☆65 · Updated last month
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆158 · Updated last year
- rl from zero pretrain, can it be done? yes. ☆265 · Updated 3 weeks ago
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆30 · Updated 6 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 6 months ago
- ☆217 · Updated 7 months ago
- Simple repository for training small reasoning models ☆40 · Updated 7 months ago
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆60 · Updated 10 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 7 months ago
- Building LLaMA 4 MoE from Scratch ☆64 · Updated 5 months ago
- Low memory full parameter finetuning of LLMs ☆53 · Updated last month
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. ☆68 · Updated last month