tzafon / Tzafon-WayPoint
Tzafon-WayPoint is a robust, scalable solution for managing large fleets of browser instances. WayPoint stands out for its cold-start speed, launching up to 1,000 browsers per second on standard GCP hardware.
☆82 · Updated 8 months ago
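This page doesn't show WayPoint's client interface, but fleet managers of this kind typically hand out remote browser endpoints that off-the-shelf automation clients can attach to. Below is a minimal sketch using Playwright's standard CDP attach; the `WAYPOINT_CDP_URL` endpoint and the attach-based workflow are assumptions for illustration, not WayPoint's documented API.

```python
# Hypothetical sketch: attaching to a remotely launched browser instance.
# Tzafon-WayPoint's actual client API is not shown on this page; this uses
# Playwright's standard CDP attach, and the endpoint URL is a placeholder.
from playwright.sync_api import sync_playwright

WAYPOINT_CDP_URL = "ws://waypoint.example.internal:9222"  # assumed endpoint

with sync_playwright() as p:
    # Attach to an already-running Chromium rather than cold-starting one
    # locally; a fleet manager would provision and return this endpoint.
    browser = p.chromium.connect_over_cdp(WAYPOINT_CDP_URL)
    page = browser.new_page()
    page.goto("https://example.com")
    print(page.title())
    browser.close()  # closes the connection, not the remote fleet
```

The point of the attach pattern is that cold-start cost lives on the fleet side: the client never pays browser launch time, which is what makes per-second launch throughput the headline metric.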
Alternatives and similar repositories for Tzafon-WayPoint
Users interested in Tzafon-WayPoint are comparing it to the libraries listed below.
- Curated collection of community environments ☆200 · Updated this week
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆782 · Updated this week
- Official CLI and Python SDK for Prime Intellect - access GPU compute, remote sandboxes, RL environments, and distributed training infrast… ☆133 · Updated this week
- Plotting (entropy, varentropy) for small LMs ☆99 · Updated 7 months ago
- ComplexTensor: Machine Learning By Bridging Classical and Quantum Computation ☆79 · Updated last year
- ☆235 · Updated last week
- rl from zero pretrain, can it be done? yes. ☆286 · Updated 3 months ago
- Async RL Training at Scale ☆985 · Updated this week
- smol models are fun too ☆93 · Updated last year
- The State Of The Art, intelligence ☆157 · Updated 4 months ago
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆390 · Updated this week
- Aidan Bench attempts to measure <big_model_smell> in LLMs. ☆315 · Updated 6 months ago
- ☆131 · Updated last year
- ☆136 · Updated 9 months ago
- Inference-time scaling for LLMs-as-a-judge. ☆320 · Updated 2 months ago
- ☆116 · Updated last week
- Testing baseline LLM performance across various models ☆332 · Updated last week
- smolLM with Entropix sampler on pytorch ☆149 · Updated last year
- ⚖️ Awesome LLM Judges ⚖️ ☆148 · Updated 8 months ago
- ☆68 · Updated 7 months ago
- look how they massacred my boy ☆63 · Updated last year
- Claude Deep Research config for Claude Code. ☆225 · Updated 9 months ago
- Storing long contexts in tiny caches with self-study ☆229 · Updated last month
- A framework for optimizing DSPy programs with RL ☆303 · Updated this week
- ☆67 · Updated 6 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆100 · Updated 5 months ago
- ☆113 · Updated 3 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 10 months ago
- MoE training for Me and You and maybe other people ☆315 · Updated last week
- Marketplace ML experiment - training without backprop ☆27 · Updated 4 months ago