saurabhaloneai / qwen3-exp
qwen3 experiments
☆31 · Updated 2 months ago
Alternatives and similar repositories for qwen3-exp
Users interested in qwen3-exp are comparing it to the libraries listed below.
- ☆68 · Updated 4 months ago
- ☆62 · Updated 2 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? · ☆77 · Updated 6 months ago
- ☆78 · Updated 9 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon · ☆83 · Updated last month
- ☆32 · Updated last month
- Simple & Scalable Pretraining for Neural Architecture Research · ☆294 · Updated last month
- Lego for GRPO · ☆29 · Updated 4 months ago
- Personal project, Generative AI, Streamlit, Python · ☆54 · Updated 4 months ago
- Marketplace ML experiment - training without backprop · ☆25 · Updated 2 weeks ago
- MLX port for xjdr's entropix sampler (mimics jax implementation) · ☆62 · Updated 10 months ago
- Code and brief explanations of my attempts at the ARC-AGI (2024) challenges :) · ☆24 · Updated 10 months ago
- ☆296 · Updated last month
- look how they massacred my boy · ☆64 · Updated 11 months ago
- Repository for creating traveling waves that integrate spatial information through time · ☆55 · Updated last month
- ☆89 · Updated 8 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna · ☆55 · Updated 7 months ago
- GPTQ and efficient search for GGUF · ☆48 · Updated last week
- A minimal implementation of DeepMind's Genie world model · ☆281 · Updated this week
- Solving data for LLMs - Create quality synthetic datasets! · ☆150 · Updated 8 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… · ☆99 · Updated last month
- A framework for pitting LLMs against each other in an evolving library of games ⚔ · ☆33 · Updated 5 months ago
- An automated tool for discovering insights from research paper corpora · ☆139 · Updated last year
- rl from zero pretrain, can it be done? yes. · ☆269 · Updated this week
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) · ☆105 · Updated 6 months ago
- Plotting (entropy, varentropy) for small LMs · ☆99 · Updated 4 months ago
- Inference, fine-tuning, and many more recipes with the Gemma family of models · ☆268 · Updated 2 months ago
- An MCP server that uses the Osmosis-Apply-1.7B model to apply code merges · ☆53 · Updated 2 months ago
- ☆62 · Updated 11 months ago
- Luth is a state-of-the-art series of fine-tuned LLMs for French · ☆31 · Updated last week