PrimeIntellect-ai / genesys
☆128 · Updated 3 months ago
Alternatives and similar repositories for genesys
Users interested in genesys are comparing it to the libraries listed below.
- Train your own SOTA deductive reasoning model ☆96 · Updated 4 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 5 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆173 · Updated 4 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆71 · Updated 3 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆65 · Updated 2 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 11 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆41 · Updated 2 months ago
- Accompanying material for the sleep-time compute paper ☆97 · Updated 2 months ago
- Entropy Based Sampling and Parallel CoT Decoding (a minimal sketch of entropy-gated decoding appears after this list) ☆17 · Updated 9 months ago
- prime-rl is a codebase for decentralized async RL training at scale ☆362 · Updated this week
- ☆64 · Updated last month
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆207 · Updated this week
- ☆69 · Updated last month
- ☆41 · Updated 5 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆103 · Updated 2 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆94 · Updated 2 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆140 · Updated 4 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models. ☆229 · Updated 8 months ago
- ☆117 · Updated 4 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆54 · Updated 5 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆91 · Updated last month
- Long context evaluation for large language models ☆219 · Updated 4 months ago
- Open source interpretability artefacts for R1. ☆154 · Updated 2 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆149 · Updated 5 months ago
- Plotting (entropy, varentropy) for small LMs (the sketch after this list shows how both statistics are computed) ☆97 · Updated last month
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆101 · Updated 4 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆211 · Updated this week
- Simple GRPO scripts and configurations (a sketch of GRPO's group-relative advantage step appears after this list). ☆59 · Updated 5 months ago
- look how they massacred my boy ☆63 · Updated 8 months ago
- Scaling Data for SWE-agents ☆283 · Updated this week
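
Two of the techniques named above are compact enough to sketch. The entropy-based sampling entry and the (entropy, varentropy) plotting entry revolve around the same two statistics of the next-token distribution. The sketch below is a minimal illustration of how those statistics can be computed and used to gate decoding, not code from either repository; the function names and the 2.0-nat threshold are assumptions chosen for the example.

```python
import numpy as np

def entropy_and_varentropy(logits: np.ndarray) -> tuple[float, float]:
    """Shannon entropy and varentropy (variance of token surprisal)
    of the next-token distribution implied by raw logits."""
    z = logits - logits.max()              # stabilize the softmax
    p = np.exp(z) / np.exp(z).sum()
    log_p = np.log(p + 1e-12)
    entropy = float(-(p * log_p).sum())
    varentropy = float((p * (log_p + entropy) ** 2).sum())
    return entropy, varentropy

def entropy_gated_sample(logits: np.ndarray, threshold: float = 2.0,
                         rng=np.random.default_rng()) -> int:
    """Decode greedily when the model is confident (low entropy),
    otherwise sample. The 2.0-nat threshold is an arbitrary example."""
    entropy, _ = entropy_and_varentropy(logits)
    if entropy < threshold:
        return int(np.argmax(logits))
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(p), p=p))
```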
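
Likewise, the GRPO entries above (the simple scripts and the Optuna-tuned trainer) share one core step: a critic-free, group-relative advantage in which each completion's reward is normalized against the other completions sampled for the same prompt, so no learned value model is needed. A minimal sketch of that step, with illustrative names and a toy reward vector:

```python
import numpy as np

def grpo_advantages(group_rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Group-relative advantages: normalize each completion's reward by the
    mean and std of its own group (all completions for one prompt)."""
    return (group_rewards - group_rewards.mean()) / (group_rewards.std() + eps)

# Example: four completions sampled for one prompt, scored 0/1 for correctness.
print(grpo_advantages(np.array([1.0, 0.0, 1.0, 0.0])))  # [ 1. -1.  1. -1.]
```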