tzafon / Tzafon-WayPointLinks
Tzafon-WayPoint is a robust, scalable solution for managing large fleets of browser instances. WayPoint stands out with unmatched cold-start speed, launching up to 1,000 browsers per second on standard GCP hardware.
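The fleet-management idea above (many browser instances started concurrently rather than one at a time) can be sketched with stub objects. This is an illustrative sketch only, not WayPoint's actual API: `StubBrowser`, `launch_instance`, and `launch_fleet` are hypothetical names, and a real launcher would start headless browser processes instead of the no-op used here.

```python
import asyncio

class StubBrowser:
    """Stand-in for a real browser instance (hypothetical)."""
    def __init__(self, instance_id: int):
        self.instance_id = instance_id

async def launch_instance(instance_id: int) -> StubBrowser:
    # A real launcher would spawn a headless browser process here;
    # we only yield control to simulate asynchronous startup.
    await asyncio.sleep(0)
    return StubBrowser(instance_id)

async def launch_fleet(n: int) -> list[StubBrowser]:
    # Launch all instances concurrently; gather preserves order.
    return await asyncio.gather(*(launch_instance(i) for i in range(n)))

fleet = asyncio.run(launch_fleet(1000))
print(len(fleet))  # 1000
```

The point of the pattern is that launches overlap instead of serializing, which is what makes per-second launch rates like the one claimed above plausible on ordinary hardware.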
☆82 · Updated 8 months ago
Alternatives and similar repositories for Tzafon-WayPoint
Users interested in Tzafon-WayPoint are comparing it to the libraries listed below.
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆768 · Updated this week
- Official CLI and Python SDK for Prime Intellect - access GPU compute, remote sandboxes, RL environments, and distributed training infrast… ☆117 · Updated this week
- Curated collection of community environments ☆195 · Updated this week
- Async RL Training at Scale ☆950 · Updated this week
- ☆131 · Updated 11 months ago
- Testing baseline LLMs performance across various models ☆330 · Updated 3 weeks ago
- Plotting (entropy, varentropy) for small LMs ☆99 · Updated 7 months ago
- The State Of The Art, intelligence ☆157 · Updated 4 months ago
- Aidan Bench attempts to measure <big_model_smell> in LLMs. ☆314 · Updated 5 months ago
- A framework for optimizing DSPy programs with RL ☆298 · Updated last month
- ☆234 · Updated 5 months ago
- ⚖️ Awesome LLM Judges ⚖️ ☆146 · Updated 7 months ago
- smol models are fun too ☆92 · Updated last year
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆286 · Updated 2 months ago
- ComplexTensor: Machine Learning By Bridging Classical and Quantum Computation ☆79 · Updated last year
- Inference-time scaling for LLMs-as-a-judge. ☆316 · Updated last month
- ☆68 · Updated 6 months ago
- Claude Deep Research config for Claude Code. ☆222 · Updated 9 months ago
- look how they massacred my boy ☆63 · Updated last year
- rl from zero pretrain, can it be done? yes. ☆282 · Updated 2 months ago
- ☆136 · Updated 9 months ago
- ☆308 · Updated last week
- Build your own visual reasoning model ☆415 · Updated last month
- MoE training for Me and You and maybe other people ☆239 · Updated this week
- ☆67 · Updated 5 months ago
- ☆115 · Updated 2 weeks ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆99 · Updated 5 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆453 · Updated last year
- smolLM with Entropix sampler on pytorch ☆149 · Updated last year
- Train your own SOTA deductive reasoning model ☆107 · Updated 9 months ago