saurabhaloneai / qwen3-exp
qwen3 experiments
☆34 · Updated 6 months ago
Alternatives and similar repositories for qwen3-exp
Users interested in qwen3-exp are comparing it to the repositories listed below.
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆85 · Updated 5 months ago
- ☆68 · Updated 8 months ago
- Lego for GRPO ☆30 · Updated 8 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆109 · Updated 10 months ago
- Testing paligemma2 finetuning on a reasoning dataset ☆18 · Updated last year
- Efficient non-uniform quantization with GPTQ for GGUF ☆58 · Updated 4 months ago
- ☆79 · Updated last year
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆88 · Updated 10 months ago
- RL from zero pretrain, can it be done? Yes. ☆286 · Updated 4 months ago
- ☆62 · Updated 6 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆307 · Updated last month
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 3 months ago
- ☆19 · Updated 10 months ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆61 · Updated last year
- Official CLI and Python SDK for Prime Intellect - access GPU compute, remote sandboxes, RL environments, and distributed training infrast… ☆143 · Updated this week
- Tensor-Slayer: manipulate weights and tensors of LLMs to achieve performance upgrades and introduce a novel inferenceless mechanistic in… ☆27 · Updated 8 months ago
- A truly open version of gpt-oss which shows the entire pre-training from scratch ☆85 · Updated 4 months ago
- Solving data for LLMs - create quality synthetic datasets! ☆151 · Updated last year
- ☆57 · Updated 11 months ago
- Marketplace ML experiment - training without backprop ☆27 · Updated 4 months ago
- look how they massacred my boy ☆63 · Updated last year
- NanoGPT speedrunning for the poor T4 enjoyers ☆73 · Updated 9 months ago
- EXO Gym is an open-source Python toolkit that facilitates distributed AI research. ☆94 · Updated last month
- ☆159 · Updated last month
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 4 months ago
- Working implementation of DeepSeek MLA ☆45 · Updated last year
- ☆73 · Updated 3 weeks ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆101 · Updated 6 months ago
- Repository to create traveling waves that integrate spatial information through time ☆56 · Updated 5 months ago
- ☆87 · Updated last year