kabir2505 / tiny-mixtral
☆39 · Updated last month
Alternatives and similar repositories for tiny-mixtral
Users interested in tiny-mixtral are comparing it to the repositories listed below.
- Working implementation of DeepSeek MLA ☆41 · Updated 4 months ago
- So, I trained a 130M Llama architecture I coded from the ground up to build a small instruct model from scratch. Trained on the FineWeb dataset… ☆14 · Updated 2 months ago
- ☆46 · Updated 2 months ago
- ☆35 · Updated last week
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆67 · Updated 2 months ago
- Fine-tune Gemma 3 on an object detection task ☆43 · Updated this week
- Collection of autoregressive model implementations ☆85 · Updated last month
- Notebook and scripts that showcase running quantized diffusion models on consumer GPUs ☆38 · Updated 7 months ago
- Making the official Triton tutorials actually comprehensible ☆34 · Updated 2 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆184 · Updated last week
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated last year
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆58 · Updated last week
- ☆168 · Updated 5 months ago
- NanoGPT speedrunning for the poor T4 enjoyers ☆66 · Updated last month
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆36 · Updated last year
- An overview of GRPO & DeepSeek-R1 training with open-source GRPO model fine-tuning ☆32 · Updated 2 weeks ago
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆44 · Updated 8 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆100 · Updated 2 months ago
- ☆47 · Updated 9 months ago
- ☆30 · Updated last month
- Distributed training (multi-node) of a Transformer model ☆68 · Updated last year
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆29 · Updated 3 months ago
- ☆57 · Updated last week
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆69 · Updated 2 weeks ago
- Minimal GRPO implementation from scratch ☆90 · Updated 2 months ago
- A collection of tricks and tools to speed up transformer models ☆166 · Updated last week
- Set of scripts to fine-tune LLMs ☆37 · Updated last year
- Building a large language foundation model ☆9 · Updated 2 months ago