arcee-ai / pybubble
☆72 · Updated last week
Alternatives and similar repositories for pybubble
Users interested in pybubble are comparing it to the libraries listed below.
- ☆68 · Updated 7 months ago
- look how they massacred my boy ☆63 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 2 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆85 · Updated 4 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆100 · Updated 5 months ago
- Storing long contexts in tiny caches with self-study ☆228 · Updated last month
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆109 · Updated 10 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 10 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 8 months ago
- rl from zero pretrain, can it be done? yes. ☆286 · Updated 3 months ago
- explore token trajectory trees on instruct and base models ☆150 · Updated 7 months ago
- SIMD quantization kernels ☆93 · Updated 3 months ago
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆293 · Updated 2 months ago
- ☆40 · Updated last year
- Ludic – an LLM-RL library for the era of experience ☆50 · Updated last week
- Curated collection of community environments ☆196 · Updated 2 weeks ago
- Official CLI and Python SDK for Prime Intellect - access GPU compute, remote sandboxes, RL environments, and distributed training infrast… ☆133 · Updated this week
- Efficient non-uniform quantization with GPTQ for GGUF ☆58 · Updated 3 months ago
- MLX port for xjdr's entropix sampler (mimics jax implementation) ☆61 · Updated last year
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆122 · Updated 2 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆305 · Updated last month
- An introduction to LLM Sampling ☆79 · Updated last year
- PageRank for LLMs ☆51 · Updated 3 months ago
- A framework for optimizing DSPy programs with RL ☆303 · Updated this week
- Project code for training LLMs to write better unit tests + code ☆21 · Updated 7 months ago
- MoE training for Me and You and maybe other people ☆309 · Updated this week
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 3 months ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆77 · Updated 10 months ago
- Marketplace ML experiment - training without backprop ☆27 · Updated 3 months ago
- smolLM with Entropix sampler on pytorch ☆149 · Updated last year