maiush / OpenCharacterTraining
Open Character Training
☆63 · Updated last month
Alternatives and similar repositories for OpenCharacterTraining
Users interested in OpenCharacterTraining are comparing it to the repositories listed below.
- Official repo for Learning to Reason for Long-Form Story Generation ☆73 · Updated 8 months ago
- ☆40 · Updated last year
- SmolLM with the Entropix sampler in PyTorch ☆149 · Updated last year
- A collection of lightweight interpretability scripts to understand how LLMs think ☆88 · Updated last week
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 2 months ago
- An introduction to LLM Sampling ☆79 · Updated last year
- ☆59 · Updated last month
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- look how they massacred my boy ☆63 · Updated last year
- ☆55 · Updated last year
- ☆93 · Updated 2 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆100 · Updated 5 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆174 · Updated 11 months ago
- Training code for Sparse Autoencoders on Embedding models ☆39 · Updated 10 months ago
- Losslessly encode text natively with arithmetic coding and HuggingFace Transformers ☆76 · Updated 2 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆109 · Updated 10 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆65 · Updated 8 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 11 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆125 · Updated 3 months ago
- A reading list of relevant papers and projects on foundation model annotation ☆28 · Updated 10 months ago
- OLMost every training recipe you need to perform data interventions with the OLMo family of models. ☆63 · Updated this week
- EvaByte: Efficient Byte-level Language Models at Scale ☆114 · Updated 8 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆111 · Updated 8 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 10 months ago
- Storing long contexts in tiny caches with self-study ☆229 · Updated last month
- ☆40 · Updated 8 months ago
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆34 · Updated 8 months ago
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- Plotting (entropy, varentropy) for small LMs ☆99 · Updated 7 months ago
- accompanying material for sleep-time compute paper ☆118 · Updated 8 months ago