unslothai / cut-cross-entropy
Apple's Cut Cross Entropy
☆21 · Updated 9 months ago
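Cut Cross Entropy computes the language-model training loss from the hidden states and the unembedding matrix without materializing the full (tokens × vocab) logit matrix. The PyTorch snippet below is a minimal illustrative sketch of that idea using vocabulary chunking; the function name and chunk size are hypothetical and this is not the repository's actual API.

```python
import torch

def chunked_cross_entropy(hidden, weight, targets, vocab_chunk=8192):
    """Mean cross-entropy from hidden states and unembedding weights,
    computed without building the full (N, V) logit matrix.

    hidden:  (N, D) final hidden states
    weight:  (V, D) classifier / unembedding weights
    targets: (N,)   target token ids
    """
    n, _ = hidden.shape
    v = weight.shape[0]

    # Logit of the correct token for each position: <h_i, w_{y_i}>.
    target_logits = (hidden * weight[targets]).sum(dim=-1)        # (N,)

    # Running log-sum-exp over vocabulary chunks keeps peak memory at
    # N x vocab_chunk instead of N x V.
    lse = hidden.new_full((n,), float("-inf"))
    for start in range(0, v, vocab_chunk):
        chunk = hidden @ weight[start:start + vocab_chunk].T      # (N, chunk)
        lse = torch.logaddexp(lse, torch.logsumexp(chunk, dim=-1))

    # CE_i = logsumexp_j <h_i, w_j> - <h_i, w_{y_i}>
    return (lse - target_logits).mean()
```

On the same inputs this matches `torch.nn.functional.cross_entropy(hidden @ weight.T, targets)` up to floating-point error; the library's value is doing the equivalent reduction in fused GPU kernels rather than a Python loop.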
Alternatives and similar repositories for cut-cross-entropy
Users interested in cut-cross-entropy are comparing it to the libraries listed below.
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month
- ☆77 · Updated 2 months ago
- Simple repository for training small reasoning models ☆40 · Updated 8 months ago
- OLMost every training recipe you need to perform data interventions with the OLMo family of models. ☆50 · Updated last week
- minimal GRPO implementation from scratch ☆98 · Updated 7 months ago
- Train, tune, and infer Bamba model ☆134 · Updated 4 months ago
- A repository for research on medium sized language models. ☆78 · Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 8 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆56 · Updated 5 months ago
- ☆39 · Updated last year
- Collection of autoregressive model implementation ☆86 · Updated 5 months ago
- ☆55 · Updated 11 months ago
- Verifiers for LLM Reinforcement Learning ☆76 · Updated 6 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- ☆93 · Updated 4 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆72 · Updated last year
- Simple and efficient pytorch-native transformer training and inference (batched) ☆78 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 9 months ago
- accompanying material for sleep-time compute paper ☆117 · Updated 5 months ago
- Implementation of SOAR ☆42 · Updated last month
- QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods. ☆24 · Updated last month
- Train your own SOTA deductive reasoning model ☆108 · Updated 7 months ago
- ☆76 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆342 · Updated 10 months ago
- Implementation of Infini-Transformer in Pytorch ☆113 · Updated 9 months ago
- ☆67 · Updated last year
- ☆107 · Updated last year
- A pipeline for LLM knowledge distillation ☆109 · Updated 6 months ago
- MatFormer repo ☆63 · Updated 10 months ago