character-ai / MuKoe
☆53 · Updated last year
Alternatives and similar repositories for MuKoe
Users interested in MuKoe are comparing it to the libraries listed below.
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last week
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆52 · Updated 3 months ago
- prime-rl is a codebase for decentralized RL training at scale ☆89 · Updated this week
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆98 · Updated 2 months ago
- ☆61 · Updated last year
- Simple GRPO scripts and configurations ☆58 · Updated 3 months ago
- Simple repository for training small reasoning models ☆27 · Updated 3 months ago
- Score LLM pretraining data with classifiers ☆55 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year
- Train your own SOTA deductive reasoning model ☆92 · Updated 2 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy** ☆37 · Updated last week
- ☆27 · Updated 10 months ago
- Very minimal (and stateless) agent framework ☆43 · Updated 4 months ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- Latent Large Language Models ☆18 · Updated 8 months ago
- A fast, local, and secure approach for training LLMs for coding tasks using GRPO with WebAssembly and interpreter feedback ☆22 · Updated last month
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆30 · Updated last week
- ☆38 · Updated 9 months ago
- ☆22 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆64 · Updated 3 weeks ago
- Collection of autoregressive model implementations ☆85 · Updated 2 weeks ago
- ☆48 · Updated 6 months ago
- LLM reads a paper and produces a working prototype ☆56 · Updated last month
- Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 11 months ago
- ☆43 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆82 · Updated last year
- Lego for GRPO ☆28 · Updated last month
- Make Triton easier ☆47 · Updated 11 months ago