character-ai / MuKoe
☆54 · Updated last year
Alternatives and similar repositories for MuKoe
Users interested in MuKoe are comparing it to the libraries listed below.
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 3 weeks ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 6 months ago
- Simple GRPO scripts and configurations ☆59 · Updated 6 months ago
- ☆39 · Updated last year
- ☆61 · Updated last year
- Long-context evaluation for large language models ☆220 · Updated 5 months ago
- Simple repository for training small reasoning models ☆37 · Updated 6 months ago
- Commit0: Library Generation from Scratch ☆161 · Updated 3 months ago
- Make Triton easier ☆47 · Updated last year
- NanoGPT speedrunning for the poor T4 enjoyers ☆69 · Updated 4 months ago
- ☆88 · Updated last year
- Transformer with Mu-Parameterization, implemented in JAX/Flax; supports FSDP on TPU pods ☆32 · Updated 2 months ago
- RWKV-7: Surpassing GPT ☆94 · Updated 9 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy** ☆46 · Updated 3 months ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Port of Andrej Karpathy's nanoGPT to the Apple MLX framework ☆112 · Updated last year
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆96 · Updated last month
- ☆27 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- Train your own SOTA deductive reasoning model ☆104 · Updated 5 months ago
- Score LLM pretraining data with classifiers ☆55 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆129 · Updated 8 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆60 · Updated this week
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models ☆96 · Updated last month
- ☆38 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆101 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 4 months ago