matttreed / diloco-sim
☆19 · Updated 5 months ago
Alternatives and similar repositories for diloco-sim
Users interested in diloco-sim are comparing it to the repositories listed below.
- Simple repository for training small reasoning models ☆33 · Updated 4 months ago
- ☆44 · Updated last year
- look how they massacred my boy ☆63 · Updated 8 months ago
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Models" ☆45 · Updated 2 months ago
- Modded vLLM to run pipeline parallelism over public networks ☆37 · Updated last month
- An open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆101 · Updated 3 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆94 · Updated last month
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated 8 months ago
- Collection of LLM completions for reasoning-gym task datasets ☆24 · Updated last month
- ☆38 · Updated 10 months ago
- ☆27 · Updated 11 months ago
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- DeMo: Decoupled Momentum Optimization ☆188 · Updated 6 months ago
- Solidity contracts for the decentralized Prime Network protocol ☆23 · Updated last week
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month
- A collection of optimizers for MLX ☆36 · Updated 3 weeks ago
- An introduction to LLM Sampling ☆78 · Updated 6 months ago
- ☆63 · Updated last month
- ☆79 · Updated 10 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 9 months ago
- Lego for GRPO ☆28 · Updated 3 weeks ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆80 · Updated 2 weeks ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆66 · Updated 2 months ago
- ☆61 · Updated last year
- ☆26 · Updated 5 months ago
- The code repository for the CURLoRA research paper. Stable LLM continual fine-tuning and catastrophic forgetting mitigation. ☆44 · Updated 9 months ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆63 · Updated 7 months ago
- train entropix like a champ! ☆20 · Updated 8 months ago