arcee-ai / pybubble
☆63 · Updated this week
Alternatives and similar repositories for pybubble
Users interested in pybubble are comparing it to the libraries listed below.
- ☆68 · Updated 5 months ago
- look how they massacred my boy ☆63 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated 3 months ago
- craft post-training data recipes ☆60 · Updated last week
- Train your own SOTA deductive reasoning model ☆107 · Updated 8 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated last month
- ☆40 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 7 months ago
- Storing long contexts in tiny caches with self-study ☆216 · Updated last month
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆255 · Updated last month
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆97 · Updated 4 months ago
- lossily compress representation vectors using product quantization ☆59 · Updated 3 weeks ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆300 · Updated 3 weeks ago
- MLX port for xjdr's entropix sampler (mimics jax implementation) ☆62 · Updated last year
- LLMProc: Unix-inspired runtime that treats LLMs as processes. ☆34 · Updated 4 months ago
- SIMD quantization kernels ☆92 · Updated 2 months ago
- Project code for training LLMs to write better unit tests + code ☆21 · Updated 6 months ago
- A collection of lightweight interpretability scripts to understand how LLMs think ☆66 · Updated this week
- smolLM with Entropix sampler on pytorch ☆150 · Updated last year
- Lego for GRPO ☆30 · Updated 5 months ago
- NSA Triton Kernels written with GPT5 and Opus 4.1 ☆65 · Updated 3 months ago
- explore token trajectory trees on instruct and base models ☆148 · Updated 5 months ago
- Code for our paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles ☆60 · Updated 6 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 8 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆111 · Updated last month
- Official CLI and Python SDK for Prime Intellect - access GPU compute, remote sandboxes, RL environments, and distributed training infrast… ☆110 · Updated this week
- An introduction to LLM Sampling ☆79 · Updated 11 months ago
- rl from zero pretrain, can it be done? yes. ☆280 · Updated last month
- ☆21 · Updated 10 months ago
- Plotting (entropy, varentropy) for small LMs ☆98 · Updated 6 months ago